=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.07592ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.070926504s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007354566s
addons_test.go:338: (dbg) Run: kubectl --context addons-489802 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-489802 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-489802 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.098664792s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-489802 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p addons-489802 ip
2024/09/20 16:56:57 [DEBUG] GET http://192.168.39.89:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p addons-489802 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-489802 -n addons-489802
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-489802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-489802 logs -n 25: (2.300461072s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | |
| | -p download-only-858543 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=crio | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| delete | -p download-only-858543 | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| start | -o=json --download-only | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| | -p download-only-349545 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=crio | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| delete | -p download-only-349545 | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| delete | -p download-only-858543 | download-only-858543 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| delete | -p download-only-349545 | download-only-349545 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| start | --download-only -p | binary-mirror-811854 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| | binary-mirror-811854 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:34057 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-811854 | binary-mirror-811854 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| addons | disable dashboard -p | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| | addons-489802 | | | | | |
| addons | enable dashboard -p | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| | addons-489802 | | | | | |
| start | -p addons-489802 --wait=true | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:47 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | enable headlamp | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
| | -p addons-489802 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable cloud-spanner -p | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
| | addons-489802 | | | | | |
| addons | disable nvidia-device-plugin | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:55 UTC |
| | -p addons-489802 | | | | | |
| addons | addons-489802 addons disable | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:56 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-489802 addons disable | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:55 UTC | 20 Sep 24 16:56 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | disable inspektor-gadget -p | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | addons-489802 | | | | | |
| ssh | addons-489802 ssh cat | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | /opt/local-path-provisioner/pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98_default_test-pvc/file1 | | | | | |
| addons | addons-489802 addons disable | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-489802 ssh curl -s | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ip | addons-489802 ip | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| addons | addons-489802 addons disable | addons-489802 | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/20 16:44:18
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 16:44:18.178711 16686 out.go:345] Setting OutFile to fd 1 ...
I0920 16:44:18.178820 16686 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:44:18.178830 16686 out.go:358] Setting ErrFile to fd 2...
I0920 16:44:18.178837 16686 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:44:18.179018 16686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8777/.minikube/bin
I0920 16:44:18.179615 16686 out.go:352] Setting JSON to false
I0920 16:44:18.180405 16686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1601,"bootTime":1726849057,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0920 16:44:18.180501 16686 start.go:139] virtualization: kvm guest
I0920 16:44:18.182896 16686 out.go:177] * [addons-489802] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I0920 16:44:18.184216 16686 notify.go:220] Checking for updates...
I0920 16:44:18.184222 16686 out.go:177] - MINIKUBE_LOCATION=19672
I0920 16:44:18.185469 16686 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 16:44:18.186874 16686 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19672-8777/kubeconfig
I0920 16:44:18.188324 16686 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8777/.minikube
I0920 16:44:18.190351 16686 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0920 16:44:18.191922 16686 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0920 16:44:18.193502 16686 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 16:44:18.225366 16686 out.go:177] * Using the kvm2 driver based on user configuration
I0920 16:44:18.226431 16686 start.go:297] selected driver: kvm2
I0920 16:44:18.226443 16686 start.go:901] validating driver "kvm2" against <nil>
I0920 16:44:18.226453 16686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 16:44:18.227135 16686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 16:44:18.227230 16686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-8777/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 16:44:18.242065 16686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
I0920 16:44:18.242112 16686 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0920 16:44:18.242404 16686 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 16:44:18.242437 16686 cni.go:84] Creating CNI manager for ""
I0920 16:44:18.242490 16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0920 16:44:18.242500 16686 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0920 16:44:18.242555 16686 start.go:340] cluster config:
{Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 16:44:18.242664 16686 iso.go:125] acquiring lock: {Name:mkba95ef0488e46f622333e9f317f43def93040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 16:44:18.244379 16686 out.go:177] * Starting "addons-489802" primary control-plane node in "addons-489802" cluster
I0920 16:44:18.245561 16686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 16:44:18.245610 16686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
I0920 16:44:18.245618 16686 cache.go:56] Caching tarball of preloaded images
I0920 16:44:18.245687 16686 preload.go:172] Found /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0920 16:44:18.245698 16686 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
I0920 16:44:18.246011 16686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json ...
I0920 16:44:18.246032 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json: {Name:mka75e2e382f021a76fc6885b0195d64c12ed744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:18.246164 16686 start.go:360] acquireMachinesLock for addons-489802: {Name:mkfeedb385cf08b5d2aa00913e85815d02a180c2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 16:44:18.246208 16686 start.go:364] duration metric: took 31.448µs to acquireMachinesLock for "addons-489802"
I0920 16:44:18.246223 16686 start.go:93] Provisioning new machine with config: &{Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I0920 16:44:18.246282 16686 start.go:125] createHost starting for "" (driver="kvm2")
I0920 16:44:18.247940 16686 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0920 16:44:18.248080 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:44:18.248117 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:44:18.262329 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
I0920 16:44:18.262809 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:44:18.263337 16686 main.go:141] libmachine: Using API Version 1
I0920 16:44:18.263357 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:44:18.263710 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:44:18.263878 16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
I0920 16:44:18.263996 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:44:18.264148 16686 start.go:159] libmachine.API.Create for "addons-489802" (driver="kvm2")
I0920 16:44:18.264173 16686 client.go:168] LocalClient.Create starting
I0920 16:44:18.264205 16686 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem
I0920 16:44:18.669459 16686 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem
I0920 16:44:18.951878 16686 main.go:141] libmachine: Running pre-create checks...
I0920 16:44:18.951905 16686 main.go:141] libmachine: (addons-489802) Calling .PreCreateCheck
I0920 16:44:18.952422 16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
I0920 16:44:18.952871 16686 main.go:141] libmachine: Creating machine...
I0920 16:44:18.952893 16686 main.go:141] libmachine: (addons-489802) Calling .Create
I0920 16:44:18.953060 16686 main.go:141] libmachine: (addons-489802) Creating KVM machine...
I0920 16:44:18.954192 16686 main.go:141] libmachine: (addons-489802) DBG | found existing default KVM network
I0920 16:44:18.954932 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:18.954771 16708 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
I0920 16:44:18.954987 16686 main.go:141] libmachine: (addons-489802) DBG | created network xml:
I0920 16:44:18.955015 16686 main.go:141] libmachine: (addons-489802) DBG | <network>
I0920 16:44:18.955034 16686 main.go:141] libmachine: (addons-489802) DBG | <name>mk-addons-489802</name>
I0920 16:44:18.955053 16686 main.go:141] libmachine: (addons-489802) DBG | <dns enable='no'/>
I0920 16:44:18.955078 16686 main.go:141] libmachine: (addons-489802) DBG |
I0920 16:44:18.955099 16686 main.go:141] libmachine: (addons-489802) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0920 16:44:18.955108 16686 main.go:141] libmachine: (addons-489802) DBG | <dhcp>
I0920 16:44:18.955115 16686 main.go:141] libmachine: (addons-489802) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0920 16:44:18.955126 16686 main.go:141] libmachine: (addons-489802) DBG | </dhcp>
I0920 16:44:18.955132 16686 main.go:141] libmachine: (addons-489802) DBG | </ip>
I0920 16:44:18.955142 16686 main.go:141] libmachine: (addons-489802) DBG |
I0920 16:44:18.955152 16686 main.go:141] libmachine: (addons-489802) DBG | </network>
I0920 16:44:18.955180 16686 main.go:141] libmachine: (addons-489802) DBG |
I0920 16:44:18.961544 16686 main.go:141] libmachine: (addons-489802) DBG | trying to create private KVM network mk-addons-489802 192.168.39.0/24...
I0920 16:44:19.029008 16686 main.go:141] libmachine: (addons-489802) Setting up store path in /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 ...
I0920 16:44:19.029031 16686 main.go:141] libmachine: (addons-489802) DBG | private KVM network mk-addons-489802 192.168.39.0/24 created
I0920 16:44:19.029050 16686 main.go:141] libmachine: (addons-489802) Building disk image from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
I0920 16:44:19.029076 16686 main.go:141] libmachine: (addons-489802) Downloading /home/jenkins/minikube-integration/19672-8777/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-8777/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
I0920 16:44:19.029097 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.028953 16708 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-8777/.minikube
I0920 16:44:19.344578 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.344398 16708 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa...
I0920 16:44:19.462008 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.461879 16708 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/addons-489802.rawdisk...
I0920 16:44:19.462055 16686 main.go:141] libmachine: (addons-489802) DBG | Writing magic tar header
I0920 16:44:19.462065 16686 main.go:141] libmachine: (addons-489802) DBG | Writing SSH key tar header
I0920 16:44:19.462072 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:19.462027 16708 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 ...
I0920 16:44:19.462210 16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802
I0920 16:44:19.462252 16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube/machines
I0920 16:44:19.462263 16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802 (perms=drwx------)
I0920 16:44:19.462287 16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube/machines (perms=drwxr-xr-x)
I0920 16:44:19.462302 16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777/.minikube
I0920 16:44:19.462312 16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777/.minikube (perms=drwxr-xr-x)
I0920 16:44:19.462324 16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-8777
I0920 16:44:19.462340 16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0920 16:44:19.462350 16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home/jenkins
I0920 16:44:19.462361 16686 main.go:141] libmachine: (addons-489802) DBG | Checking permissions on dir: /home
I0920 16:44:19.462374 16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration/19672-8777 (perms=drwxrwxr-x)
I0920 16:44:19.462383 16686 main.go:141] libmachine: (addons-489802) DBG | Skipping /home - not owner
I0920 16:44:19.462409 16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0920 16:44:19.462428 16686 main.go:141] libmachine: (addons-489802) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0920 16:44:19.462441 16686 main.go:141] libmachine: (addons-489802) Creating domain...
I0920 16:44:19.463291 16686 main.go:141] libmachine: (addons-489802) define libvirt domain using xml:
I0920 16:44:19.463308 16686 main.go:141] libmachine: (addons-489802) <domain type='kvm'>
I0920 16:44:19.463315 16686 main.go:141] libmachine: (addons-489802) <name>addons-489802</name>
I0920 16:44:19.463321 16686 main.go:141] libmachine: (addons-489802) <memory unit='MiB'>4000</memory>
I0920 16:44:19.463328 16686 main.go:141] libmachine: (addons-489802) <vcpu>2</vcpu>
I0920 16:44:19.463335 16686 main.go:141] libmachine: (addons-489802) <features>
I0920 16:44:19.463346 16686 main.go:141] libmachine: (addons-489802) <acpi/>
I0920 16:44:19.463360 16686 main.go:141] libmachine: (addons-489802) <apic/>
I0920 16:44:19.463368 16686 main.go:141] libmachine: (addons-489802) <pae/>
I0920 16:44:19.463375 16686 main.go:141] libmachine: (addons-489802)
I0920 16:44:19.463386 16686 main.go:141] libmachine: (addons-489802) </features>
I0920 16:44:19.463393 16686 main.go:141] libmachine: (addons-489802) <cpu mode='host-passthrough'>
I0920 16:44:19.463402 16686 main.go:141] libmachine: (addons-489802)
I0920 16:44:19.463408 16686 main.go:141] libmachine: (addons-489802) </cpu>
I0920 16:44:19.463415 16686 main.go:141] libmachine: (addons-489802) <os>
I0920 16:44:19.463424 16686 main.go:141] libmachine: (addons-489802) <type>hvm</type>
I0920 16:44:19.463435 16686 main.go:141] libmachine: (addons-489802) <boot dev='cdrom'/>
I0920 16:44:19.463445 16686 main.go:141] libmachine: (addons-489802) <boot dev='hd'/>
I0920 16:44:19.463472 16686 main.go:141] libmachine: (addons-489802) <bootmenu enable='no'/>
I0920 16:44:19.463497 16686 main.go:141] libmachine: (addons-489802) </os>
I0920 16:44:19.463520 16686 main.go:141] libmachine: (addons-489802) <devices>
I0920 16:44:19.463534 16686 main.go:141] libmachine: (addons-489802) <disk type='file' device='cdrom'>
I0920 16:44:19.463547 16686 main.go:141] libmachine: (addons-489802) <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/boot2docker.iso'/>
I0920 16:44:19.463558 16686 main.go:141] libmachine: (addons-489802) <target dev='hdc' bus='scsi'/>
I0920 16:44:19.463570 16686 main.go:141] libmachine: (addons-489802) <readonly/>
I0920 16:44:19.463577 16686 main.go:141] libmachine: (addons-489802) </disk>
I0920 16:44:19.463584 16686 main.go:141] libmachine: (addons-489802) <disk type='file' device='disk'>
I0920 16:44:19.463592 16686 main.go:141] libmachine: (addons-489802) <driver name='qemu' type='raw' cache='default' io='threads' />
I0920 16:44:19.463600 16686 main.go:141] libmachine: (addons-489802) <source file='/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/addons-489802.rawdisk'/>
I0920 16:44:19.463608 16686 main.go:141] libmachine: (addons-489802) <target dev='hda' bus='virtio'/>
I0920 16:44:19.463614 16686 main.go:141] libmachine: (addons-489802) </disk>
I0920 16:44:19.463623 16686 main.go:141] libmachine: (addons-489802) <interface type='network'>
I0920 16:44:19.463633 16686 main.go:141] libmachine: (addons-489802) <source network='mk-addons-489802'/>
I0920 16:44:19.463643 16686 main.go:141] libmachine: (addons-489802) <model type='virtio'/>
I0920 16:44:19.463651 16686 main.go:141] libmachine: (addons-489802) </interface>
I0920 16:44:19.463660 16686 main.go:141] libmachine: (addons-489802) <interface type='network'>
I0920 16:44:19.463672 16686 main.go:141] libmachine: (addons-489802) <source network='default'/>
I0920 16:44:19.463681 16686 main.go:141] libmachine: (addons-489802) <model type='virtio'/>
I0920 16:44:19.463703 16686 main.go:141] libmachine: (addons-489802) </interface>
I0920 16:44:19.463722 16686 main.go:141] libmachine: (addons-489802) <serial type='pty'>
I0920 16:44:19.463732 16686 main.go:141] libmachine: (addons-489802) <target port='0'/>
I0920 16:44:19.463738 16686 main.go:141] libmachine: (addons-489802) </serial>
I0920 16:44:19.463745 16686 main.go:141] libmachine: (addons-489802) <console type='pty'>
I0920 16:44:19.463755 16686 main.go:141] libmachine: (addons-489802) <target type='serial' port='0'/>
I0920 16:44:19.463762 16686 main.go:141] libmachine: (addons-489802) </console>
I0920 16:44:19.463767 16686 main.go:141] libmachine: (addons-489802) <rng model='virtio'>
I0920 16:44:19.463776 16686 main.go:141] libmachine: (addons-489802) <backend model='random'>/dev/random</backend>
I0920 16:44:19.463784 16686 main.go:141] libmachine: (addons-489802) </rng>
I0920 16:44:19.463793 16686 main.go:141] libmachine: (addons-489802)
I0920 16:44:19.463807 16686 main.go:141] libmachine: (addons-489802)
I0920 16:44:19.463822 16686 main.go:141] libmachine: (addons-489802) </devices>
I0920 16:44:19.463837 16686 main.go:141] libmachine: (addons-489802) </domain>
I0920 16:44:19.463852 16686 main.go:141] libmachine: (addons-489802)
I0920 16:44:19.470320 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:86:10:bf in network default
I0920 16:44:19.470900 16686 main.go:141] libmachine: (addons-489802) Ensuring networks are active...
I0920 16:44:19.470920 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:19.471767 16686 main.go:141] libmachine: (addons-489802) Ensuring network default is active
I0920 16:44:19.472031 16686 main.go:141] libmachine: (addons-489802) Ensuring network mk-addons-489802 is active
I0920 16:44:19.472810 16686 main.go:141] libmachine: (addons-489802) Getting domain xml...
I0920 16:44:19.473428 16686 main.go:141] libmachine: (addons-489802) Creating domain...
I0920 16:44:20.958983 16686 main.go:141] libmachine: (addons-489802) Waiting to get IP...
I0920 16:44:20.959942 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:20.960292 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:20.960332 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:20.960280 16708 retry.go:31] will retry after 218.466528ms: waiting for machine to come up
I0920 16:44:21.180891 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:21.181202 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:21.181228 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.181159 16708 retry.go:31] will retry after 269.124789ms: waiting for machine to come up
I0920 16:44:21.451562 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:21.451985 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:21.452021 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.451946 16708 retry.go:31] will retry after 418.879425ms: waiting for machine to come up
I0920 16:44:21.872595 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:21.873035 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:21.873056 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:21.873002 16708 retry.go:31] will retry after 379.463169ms: waiting for machine to come up
I0920 16:44:22.254754 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:22.255179 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:22.255208 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:22.255151 16708 retry.go:31] will retry after 621.089592ms: waiting for machine to come up
I0920 16:44:22.877890 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:22.878236 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:22.878254 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:22.878215 16708 retry.go:31] will retry after 896.419124ms: waiting for machine to come up
I0920 16:44:23.776119 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:23.776531 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:23.776580 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:23.776503 16708 retry.go:31] will retry after 792.329452ms: waiting for machine to come up
I0920 16:44:24.570579 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:24.571007 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:24.571032 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:24.570964 16708 retry.go:31] will retry after 1.123730634s: waiting for machine to come up
I0920 16:44:25.695981 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:25.696433 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:25.696455 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:25.696382 16708 retry.go:31] will retry after 1.437323391s: waiting for machine to come up
I0920 16:44:27.136109 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:27.136681 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:27.136706 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:27.136631 16708 retry.go:31] will retry after 2.286987635s: waiting for machine to come up
I0920 16:44:29.425015 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:29.425554 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:29.425597 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:29.425518 16708 retry.go:31] will retry after 1.976852311s: waiting for machine to come up
I0920 16:44:31.404712 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:31.405218 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:31.405240 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:31.405170 16708 retry.go:31] will retry after 3.060545694s: waiting for machine to come up
I0920 16:44:34.467106 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:34.467532 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:34.467559 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:34.467474 16708 retry.go:31] will retry after 3.246517198s: waiting for machine to come up
I0920 16:44:37.717806 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:37.718239 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find current IP address of domain addons-489802 in network mk-addons-489802
I0920 16:44:37.718274 16686 main.go:141] libmachine: (addons-489802) DBG | I0920 16:44:37.718168 16708 retry.go:31] will retry after 4.118490306s: waiting for machine to come up
I0920 16:44:41.841226 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:41.841726 16686 main.go:141] libmachine: (addons-489802) Found IP for machine: 192.168.39.89
I0920 16:44:41.841743 16686 main.go:141] libmachine: (addons-489802) Reserving static IP address...
I0920 16:44:41.841755 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has current primary IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:41.842160 16686 main.go:141] libmachine: (addons-489802) DBG | unable to find host DHCP lease matching {name: "addons-489802", mac: "52:54:00:bf:85:db", ip: "192.168.39.89"} in network mk-addons-489802
I0920 16:44:41.913230 16686 main.go:141] libmachine: (addons-489802) Reserved static IP address: 192.168.39.89
I0920 16:44:41.913257 16686 main.go:141] libmachine: (addons-489802) Waiting for SSH to be available...
I0920 16:44:41.913265 16686 main.go:141] libmachine: (addons-489802) DBG | Getting to WaitForSSH function...
I0920 16:44:41.915767 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:41.916236 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:85:db}
I0920 16:44:41.916267 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:41.916422 16686 main.go:141] libmachine: (addons-489802) DBG | Using SSH client type: external
I0920 16:44:41.916446 16686 main.go:141] libmachine: (addons-489802) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa (-rw-------)
I0920 16:44:41.916467 16686 main.go:141] libmachine: (addons-489802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa -p 22] /usr/bin/ssh <nil>}
I0920 16:44:41.916475 16686 main.go:141] libmachine: (addons-489802) DBG | About to run SSH command:
I0920 16:44:41.916485 16686 main.go:141] libmachine: (addons-489802) DBG | exit 0
I0920 16:44:42.045938 16686 main.go:141] libmachine: (addons-489802) DBG | SSH cmd err, output: <nil>:
I0920 16:44:42.046220 16686 main.go:141] libmachine: (addons-489802) KVM machine creation complete!
I0920 16:44:42.046564 16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
I0920 16:44:42.047127 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:44:42.047334 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:44:42.047475 16686 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0920 16:44:42.047490 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:44:42.049083 16686 main.go:141] libmachine: Detecting operating system of created instance...
I0920 16:44:42.049109 16686 main.go:141] libmachine: Waiting for SSH to be available...
I0920 16:44:42.049116 16686 main.go:141] libmachine: Getting to WaitForSSH function...
I0920 16:44:42.049122 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:42.051309 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.051675 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.051731 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.051767 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:42.051947 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.052082 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.052201 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:42.052358 16686 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:42.052546 16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I0920 16:44:42.052561 16686 main.go:141] libmachine: About to run SSH command:
exit 0
I0920 16:44:42.153288 16686 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0920 16:44:42.153332 16686 main.go:141] libmachine: Detecting the provisioner...
I0920 16:44:42.153344 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:42.156232 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.156583 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.156612 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.156760 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:42.156968 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.157119 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.157234 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:42.157410 16686 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:42.157610 16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I0920 16:44:42.157626 16686 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0920 16:44:42.254380 16686 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0920 16:44:42.254438 16686 main.go:141] libmachine: found compatible host: buildroot
I0920 16:44:42.254444 16686 main.go:141] libmachine: Provisioning with buildroot...
I0920 16:44:42.254451 16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
I0920 16:44:42.254703 16686 buildroot.go:166] provisioning hostname "addons-489802"
I0920 16:44:42.254734 16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
I0920 16:44:42.254884 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:42.257868 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.258311 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.258354 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.258809 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:42.259005 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.259172 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.259323 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:42.259521 16686 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:42.259670 16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I0920 16:44:42.259683 16686 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-489802 && echo "addons-489802" | sudo tee /etc/hostname
I0920 16:44:42.370953 16686 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-489802
I0920 16:44:42.370980 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:42.373616 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.373970 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.374002 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.374153 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:42.374357 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.374531 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.374634 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:42.374808 16686 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:42.374994 16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I0920 16:44:42.375012 16686 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-489802' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-489802/g' /etc/hosts;
else
echo '127.0.1.1 addons-489802' | sudo tee -a /etc/hosts;
fi
fi
I0920 16:44:42.482921 16686 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0920 16:44:42.482949 16686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-8777/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-8777/.minikube}
I0920 16:44:42.482989 16686 buildroot.go:174] setting up certificates
I0920 16:44:42.482998 16686 provision.go:84] configureAuth start
I0920 16:44:42.483007 16686 main.go:141] libmachine: (addons-489802) Calling .GetMachineName
I0920 16:44:42.483254 16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
I0920 16:44:42.486082 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.486435 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.486458 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.486591 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:42.489005 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.489385 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.489412 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.489530 16686 provision.go:143] copyHostCerts
I0920 16:44:42.489599 16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/ca.pem (1082 bytes)
I0920 16:44:42.489774 16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/cert.pem (1123 bytes)
I0920 16:44:42.489920 16686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-8777/.minikube/key.pem (1675 bytes)
I0920 16:44:42.490019 16686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem org=jenkins.addons-489802 san=[127.0.0.1 192.168.39.89 addons-489802 localhost minikube]
I0920 16:44:42.556359 16686 provision.go:177] copyRemoteCerts
I0920 16:44:42.556423 16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0920 16:44:42.556446 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:42.559402 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.559884 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.559911 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.560233 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:42.560402 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.560524 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:42.560649 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:44:42.640095 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0920 16:44:42.664291 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0920 16:44:42.687271 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0920 16:44:42.709976 16686 provision.go:87] duration metric: took 226.963662ms to configureAuth
I0920 16:44:42.710011 16686 buildroot.go:189] setting minikube options for container-runtime
I0920 16:44:42.710210 16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 16:44:42.710288 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:42.713157 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.713576 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.713605 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.713861 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:42.714050 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.714198 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.714335 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:42.714575 16686 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:42.714732 16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I0920 16:44:42.714746 16686 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0920 16:44:42.936196 16686 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
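The CRIO_MINIKUBE_OPTIONS drop-in above is written by running a shell pipeline on the guest over SSH with the machine key and user recorded in the sshutil lines. A minimal sketch of running one such remote command with golang.org/x/crypto/ssh follows; it is an illustration, not minikube's ssh_runner, and the InsecureIgnoreHostKey callback is acceptable only for a throwaway test VM.

// Hypothetical sketch, not minikube's ssh_runner: run one command on the
// guest over SSH using the key path and user recorded in the log above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.39.89:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// The same drop-in the provisioner writes above, followed by a CRI-O restart.
	out, err := session.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	fmt.Println(string(out), err)
}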
I0920 16:44:42.936230 16686 main.go:141] libmachine: Checking connection to Docker...
I0920 16:44:42.936255 16686 main.go:141] libmachine: (addons-489802) Calling .GetURL
I0920 16:44:42.937633 16686 main.go:141] libmachine: (addons-489802) DBG | Using libvirt version 6000000
I0920 16:44:42.940023 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.940360 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.940383 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.940608 16686 main.go:141] libmachine: Docker is up and running!
I0920 16:44:42.940623 16686 main.go:141] libmachine: Reticulating splines...
I0920 16:44:42.940629 16686 client.go:171] duration metric: took 24.676449957s to LocalClient.Create
I0920 16:44:42.940649 16686 start.go:167] duration metric: took 24.676502405s to libmachine.API.Create "addons-489802"
I0920 16:44:42.940665 16686 start.go:293] postStartSetup for "addons-489802" (driver="kvm2")
I0920 16:44:42.940675 16686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0920 16:44:42.940691 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:44:42.940982 16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0920 16:44:42.941005 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:42.943365 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.943725 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:42.943749 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:42.943950 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:42.944124 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:42.944283 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:42.944440 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:44:43.023999 16686 ssh_runner.go:195] Run: cat /etc/os-release
I0920 16:44:43.028231 16686 info.go:137] Remote host: Buildroot 2023.02.9
I0920 16:44:43.028271 16686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/addons for local assets ...
I0920 16:44:43.028362 16686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8777/.minikube/files for local assets ...
I0920 16:44:43.028391 16686 start.go:296] duration metric: took 87.721087ms for postStartSetup
I0920 16:44:43.028430 16686 main.go:141] libmachine: (addons-489802) Calling .GetConfigRaw
I0920 16:44:43.029004 16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
I0920 16:44:43.032101 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.032392 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:43.032420 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.032651 16686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/config.json ...
I0920 16:44:43.032872 16686 start.go:128] duration metric: took 24.786580765s to createHost
I0920 16:44:43.032897 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:43.035034 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.035343 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:43.035377 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.035500 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:43.035665 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:43.035848 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:43.035974 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:43.036134 16686 main.go:141] libmachine: Using SSH client type: native
I0920 16:44:43.036283 16686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.89 22 <nil> <nil>}
I0920 16:44:43.036293 16686 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0920 16:44:43.134258 16686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726850683.106297733
I0920 16:44:43.134281 16686 fix.go:216] guest clock: 1726850683.106297733
I0920 16:44:43.134318 16686 fix.go:229] Guest: 2024-09-20 16:44:43.106297733 +0000 UTC Remote: 2024-09-20 16:44:43.032884764 +0000 UTC m=+24.887429631 (delta=73.412969ms)
I0920 16:44:43.134347 16686 fix.go:200] guest clock delta is within tolerance: 73.412969ms
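The guest clock check above runs date +%s.%N on the VM, parses the seconds.nanoseconds output, and compares it with the host clock. A small sketch of that parsing and delta computation follows; the 2s tolerance in it is an assumed example value, since the log only shows that a 73ms delta was accepted.

// Hypothetical sketch of the guest clock check logged above: parse the
// guest's `date +%s.%N` output and compare it with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad the fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := parseGuestClock("1726850683.106297733\n") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// 2s tolerance is an assumed example value, not minikube's configured limit.
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= 2*time.Second)
}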
I0920 16:44:43.134354 16686 start.go:83] releasing machines lock for "addons-489802", held for 24.88813735s
I0920 16:44:43.134375 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:44:43.134602 16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
I0920 16:44:43.137503 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.137857 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:43.137885 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.138022 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:44:43.138471 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:44:43.138655 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:44:43.138740 16686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0920 16:44:43.138784 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:43.138890 16686 ssh_runner.go:195] Run: cat /version.json
I0920 16:44:43.138911 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:44:43.141496 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.141700 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.141814 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:43.141848 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.141984 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:43.142122 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:43.142207 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:43.142233 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:43.142240 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:43.142382 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:44:43.142400 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:44:43.142527 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:44:43.142639 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:44:43.142738 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:44:43.214377 16686 ssh_runner.go:195] Run: systemctl --version
I0920 16:44:43.255061 16686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0920 16:44:43.407471 16686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0920 16:44:43.413920 16686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0920 16:44:43.413984 16686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0920 16:44:43.430049 16686 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0920 16:44:43.430083 16686 start.go:495] detecting cgroup driver to use...
I0920 16:44:43.430165 16686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0920 16:44:43.445755 16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0920 16:44:43.460072 16686 docker.go:217] disabling cri-docker service (if available) ...
I0920 16:44:43.460130 16686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0920 16:44:43.473445 16686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0920 16:44:43.486406 16686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0920 16:44:43.599287 16686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0920 16:44:43.771188 16686 docker.go:233] disabling docker service ...
I0920 16:44:43.771285 16686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0920 16:44:43.786254 16686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0920 16:44:43.799345 16686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0920 16:44:43.929040 16686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0920 16:44:44.054620 16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0920 16:44:44.068879 16686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0920 16:44:44.087412 16686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I0920 16:44:44.087482 16686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I0920 16:44:44.098030 16686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0920 16:44:44.098093 16686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0920 16:44:44.108462 16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0920 16:44:44.119209 16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0920 16:44:44.130359 16686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0920 16:44:44.141802 16686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0920 16:44:44.152585 16686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0920 16:44:44.169299 16686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
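The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs and move conmon into the pod cgroup; the default_sysctls edit follows the same pattern for net.ipv4.ip_unprivileged_port_start. A rough in-memory equivalent of the first three edits in Go follows; the starting snippet is invented for illustration.

// Hypothetical sketch, applied to an invented starting snippet: the
// in-memory equivalent of the sed edits above on 02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image used for pod sandboxes.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch CRI-O to the cgroupfs cgroup manager.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}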
I0920 16:44:44.179293 16686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0920 16:44:44.188257 16686 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0920 16:44:44.188326 16686 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0920 16:44:44.200400 16686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0920 16:44:44.210617 16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:44:44.322851 16686 ssh_runner.go:195] Run: sudo systemctl restart crio
I0920 16:44:44.414303 16686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I0920 16:44:44.414398 16686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0920 16:44:44.418774 16686 start.go:563] Will wait 60s for crictl version
I0920 16:44:44.418851 16686 ssh_runner.go:195] Run: which crictl
I0920 16:44:44.422352 16686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0920 16:44:44.464229 16686 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I0920 16:44:44.464345 16686 ssh_runner.go:195] Run: crio --version
I0920 16:44:44.492112 16686 ssh_runner.go:195] Run: crio --version
I0920 16:44:44.519927 16686 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
I0920 16:44:44.520939 16686 main.go:141] libmachine: (addons-489802) Calling .GetIP
I0920 16:44:44.523216 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:44.523500 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:44:44.523521 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:44:44.523769 16686 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0920 16:44:44.527526 16686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0920 16:44:44.539346 16686 kubeadm.go:883] updating cluster {Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0920 16:44:44.539450 16686 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 16:44:44.539491 16686 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 16:44:44.570607 16686 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
I0920 16:44:44.570672 16686 ssh_runner.go:195] Run: which lz4
I0920 16:44:44.574305 16686 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0920 16:44:44.578003 16686 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0920 16:44:44.578036 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
I0920 16:44:45.832824 16686 crio.go:462] duration metric: took 1.258544501s to copy over tarball
I0920 16:44:45.832907 16686 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0920 16:44:49.851668 16686 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.018714604s)
I0920 16:44:49.851726 16686 crio.go:469] duration metric: took 4.01886728s to extract the tarball
I0920 16:44:49.851737 16686 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0920 16:44:49.896630 16686 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 16:44:49.944783 16686 crio.go:514] all images are preloaded for cri-o runtime.
I0920 16:44:49.944818 16686 cache_images.go:84] Images are preloaded, skipping loading
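The preload check above lists images with crictl images --output json and looks for the expected kube-apiserver tag before deciding whether to copy and extract the tarball; after extraction the same check passes. A sketch of that check follows; the JSON field names are an assumption about crictl's output shape, not verified against this crictl version.

// Hypothetical sketch of the preload check: list images through crictl and
// look for the expected kube-apiserver tag. The JSON field names below are an
// assumption about crictl's output shape.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.31.1"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded")
}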
I0920 16:44:49.944827 16686 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.31.1 crio true true} ...
I0920 16:44:49.944968 16686 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-489802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0920 16:44:49.945079 16686 ssh_runner.go:195] Run: crio config
I0920 16:44:50.001938 16686 cni.go:84] Creating CNI manager for ""
I0920 16:44:50.001967 16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0920 16:44:50.001981 16686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0920 16:44:50.002006 16686 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-489802 NodeName:addons-489802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0920 16:44:50.002170 16686 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.89
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-489802"
kubeletExtraArgs:
node-ip: 192.168.39.89
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0920 16:44:50.002231 16686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0920 16:44:50.013339 16686 binaries.go:44] Found k8s binaries, skipping transfer
I0920 16:44:50.013411 16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0920 16:44:50.024767 16686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0920 16:44:50.045363 16686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0920 16:44:50.062898 16686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
I0920 16:44:50.080572 16686 ssh_runner.go:195] Run: grep 192.168.39.89 control-plane.minikube.internal$ /etc/hosts
I0920 16:44:50.085773 16686 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0920 16:44:50.098757 16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:44:50.240556 16686 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0920 16:44:50.258141 16686 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802 for IP: 192.168.39.89
I0920 16:44:50.258209 16686 certs.go:194] generating shared ca certs ...
I0920 16:44:50.258255 16686 certs.go:226] acquiring lock for ca certs: {Name:mkc7ef6c737c6bdc3fdd9dcff8f57029c020d8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.258438 16686 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key
I0920 16:44:50.381564 16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt ...
I0920 16:44:50.381596 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt: {Name:mkba49b4d048d5af44df48f4edd690a694a33473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.381797 16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key ...
I0920 16:44:50.381808 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key: {Name:mk653576ff784ce50de2dfa9e3a0facde1d60271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.381907 16686 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key
I0920 16:44:50.546530 16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt ...
I0920 16:44:50.546555 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt: {Name:mk67c6a6b77428ba0cdac9b9e34d49fcf308bb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.546726 16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key ...
I0920 16:44:50.546738 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key: {Name:mkd7ae4f2d01ceba146c4dc9b43c4a1a5ab41e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
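The minikubeCA and proxyClientCA pairs written above are self-signed certificate authorities that later sign the apiserver, client and aggregator certs. A minimal sketch of minting such a CA with crypto/x509 follows; it is illustrative only, and the key size, validity and subject are assumed values rather than minikube's.

// Hypothetical sketch, not minikube's crypto helpers: mint a self-signed CA
// comparable to the minikubeCA/proxyClientCA pairs written above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(time.Now().UnixNano()),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// ca.crt and ca.key equivalents; written to stdout only in this sketch.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}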
I0920 16:44:50.546824 16686 certs.go:256] generating profile certs ...
I0920 16:44:50.546886 16686 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key
I0920 16:44:50.546900 16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt with IP's: []
I0920 16:44:50.626758 16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt ...
I0920 16:44:50.626785 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.crt: {Name:mkc5f095f711647000f5605c19ca0db353359e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.626972 16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key ...
I0920 16:44:50.626986 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/client.key: {Name:mk3f0c684e304c5dc541f54b7034757bf95d7fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.627082 16686 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc
I0920 16:44:50.627100 16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
I0920 16:44:50.846521 16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc ...
I0920 16:44:50.846553 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc: {Name:mkb99a44e1af5a4a578b6ff7445cbfc9f6d1c4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.846716 16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc ...
I0920 16:44:50.846729 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc: {Name:mk1ce5fd024a94836fd45952b6c3038de9bbeaae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.846799 16686 certs.go:381] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt.1bac25cc -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt
I0920 16:44:50.846874 16686 certs.go:385] copying /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key.1bac25cc -> /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key
I0920 16:44:50.846919 16686 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key
I0920 16:44:50.846934 16686 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt with IP's: []
I0920 16:44:51.074511 16686 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt ...
I0920 16:44:51.074548 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt: {Name:mk593c697632b0437e75154f622f66ff162758f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:51.074697 16686 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key ...
I0920 16:44:51.074708 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key: {Name:mkd7afdfda0e263fcdc4ad0882491ad3726f4657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:51.074875 16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca-key.pem (1675 bytes)
I0920 16:44:51.074907 16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/ca.pem (1082 bytes)
I0920 16:44:51.074929 16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/cert.pem (1123 bytes)
I0920 16:44:51.074950 16686 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8777/.minikube/certs/key.pem (1675 bytes)
I0920 16:44:51.075572 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0920 16:44:51.104195 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0920 16:44:51.128646 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0920 16:44:51.153291 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0920 16:44:51.177482 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0920 16:44:51.202143 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0920 16:44:51.226168 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0920 16:44:51.251069 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/profiles/addons-489802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0920 16:44:51.274951 16686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-8777/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0920 16:44:51.298272 16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0920 16:44:51.314508 16686 ssh_runner.go:195] Run: openssl version
I0920 16:44:51.320418 16686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0920 16:44:51.331616 16686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:51.336211 16686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:51.336270 16686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:51.341681 16686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0920 16:44:51.351994 16686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0920 16:44:51.356403 16686 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0920 16:44:51.356470 16686 kubeadm.go:392] StartCluster: {Name:addons-489802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-489802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 16:44:51.356584 16686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0920 16:44:51.356645 16686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0920 16:44:51.396773 16686 cri.go:89] found id: ""
I0920 16:44:51.396839 16686 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0920 16:44:51.407827 16686 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0920 16:44:51.417398 16686 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0920 16:44:51.426423 16686 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0920 16:44:51.426443 16686 kubeadm.go:157] found existing configuration files:
I0920 16:44:51.426481 16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0920 16:44:51.435274 16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0920 16:44:51.435338 16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0920 16:44:51.444427 16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0920 16:44:51.453046 16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0920 16:44:51.453111 16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0920 16:44:51.462277 16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0920 16:44:51.470882 16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0920 16:44:51.470938 16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0920 16:44:51.480053 16686 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0920 16:44:51.488382 16686 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0920 16:44:51.488450 16686 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0920 16:44:51.497406 16686 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0920 16:44:51.541221 16686 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0920 16:44:51.541351 16686 kubeadm.go:310] [preflight] Running pre-flight checks
I0920 16:44:51.633000 16686 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0920 16:44:51.633106 16686 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0920 16:44:51.633217 16686 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0920 16:44:51.641465 16686 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0920 16:44:51.643561 16686 out.go:235] - Generating certificates and keys ...
I0920 16:44:51.643637 16686 kubeadm.go:310] [certs] Using existing ca certificate authority
I0920 16:44:51.643707 16686 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0920 16:44:51.974976 16686 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0920 16:44:52.212429 16686 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0920 16:44:52.725412 16686 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0920 16:44:52.824449 16686 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0920 16:44:52.884139 16686 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0920 16:44:52.884436 16686 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-489802 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
I0920 16:44:53.064017 16686 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0920 16:44:53.064225 16686 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-489802 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
I0920 16:44:53.110684 16686 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0920 16:44:53.439405 16686 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0920 16:44:53.523372 16686 kubeadm.go:310] [certs] Generating "sa" key and public key
I0920 16:44:53.523450 16686 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0920 16:44:53.894835 16686 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0920 16:44:54.063405 16686 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0920 16:44:54.134012 16686 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0920 16:44:54.252802 16686 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0920 16:44:54.496063 16686 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0920 16:44:54.498352 16686 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0920 16:44:54.501105 16686 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0920 16:44:54.502882 16686 out.go:235] - Booting up control plane ...
I0920 16:44:54.503004 16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0920 16:44:54.503113 16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0920 16:44:54.503192 16686 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0920 16:44:54.517820 16686 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0920 16:44:54.525307 16686 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0920 16:44:54.525359 16686 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0920 16:44:54.642832 16686 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0920 16:44:54.642977 16686 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0920 16:44:55.143793 16686 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.346631ms
I0920 16:44:55.143884 16686 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0920 16:45:00.142510 16686 kubeadm.go:310] [api-check] The API server is healthy after 5.001658723s
I0920 16:45:00.161952 16686 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0920 16:45:00.199831 16686 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0920 16:45:00.237142 16686 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0920 16:45:00.237431 16686 kubeadm.go:310] [mark-control-plane] Marking the node addons-489802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0920 16:45:00.267465 16686 kubeadm.go:310] [bootstrap-token] Using token: pxuown.8491ndv1zucibr8t
I0920 16:45:00.269321 16686 out.go:235] - Configuring RBAC rules ...
I0920 16:45:00.269445 16686 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0920 16:45:00.277244 16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0920 16:45:00.297062 16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0920 16:45:00.303392 16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0920 16:45:00.310726 16686 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0920 16:45:00.317990 16686 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0920 16:45:00.550067 16686 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0920 16:45:00.983547 16686 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0920 16:45:01.549916 16686 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0920 16:45:01.549943 16686 kubeadm.go:310]
I0920 16:45:01.550082 16686 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0920 16:45:01.550165 16686 kubeadm.go:310]
I0920 16:45:01.550391 16686 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0920 16:45:01.550403 16686 kubeadm.go:310]
I0920 16:45:01.550435 16686 kubeadm.go:310] mkdir -p $HOME/.kube
I0920 16:45:01.550520 16686 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0920 16:45:01.550590 16686 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0920 16:45:01.550601 16686 kubeadm.go:310]
I0920 16:45:01.550668 16686 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0920 16:45:01.550680 16686 kubeadm.go:310]
I0920 16:45:01.550751 16686 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0920 16:45:01.550761 16686 kubeadm.go:310]
I0920 16:45:01.550847 16686 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0920 16:45:01.550942 16686 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0920 16:45:01.551031 16686 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0920 16:45:01.551040 16686 kubeadm.go:310]
I0920 16:45:01.551130 16686 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0920 16:45:01.551241 16686 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0920 16:45:01.551252 16686 kubeadm.go:310]
I0920 16:45:01.551332 16686 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxuown.8491ndv1zucibr8t \
I0920 16:45:01.551422 16686 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d \
I0920 16:45:01.551443 16686 kubeadm.go:310] --control-plane
I0920 16:45:01.551456 16686 kubeadm.go:310]
I0920 16:45:01.551575 16686 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0920 16:45:01.551586 16686 kubeadm.go:310]
I0920 16:45:01.551676 16686 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pxuown.8491ndv1zucibr8t \
I0920 16:45:01.551784 16686 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:736f3fb521855813decb4a7214f2b8beff5f81c3971b60decfa20a8807626a2d
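The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info; joining nodes pin the control plane against it. A short sketch that recomputes it from the ca.crt used here:

// Sketch: recompute the --discovery-token-ca-cert-hash printed above. It is
// the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

The commonly documented openssl equivalent is: openssl x509 -pubkey -in ca.crt | openssl rsa -pubin -outform der | openssl dgst -sha256.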
I0920 16:45:01.552616 16686 kubeadm.go:310] W0920 16:44:51.520638 818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 16:45:01.553045 16686 kubeadm.go:310] W0920 16:44:51.522103 818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 16:45:01.553171 16686 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0920 16:45:01.553193 16686 cni.go:84] Creating CNI manager for ""
I0920 16:45:01.553204 16686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0920 16:45:01.554912 16686 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0920 16:45:01.556375 16686 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0920 16:45:01.567185 16686 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
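The 1-k8s.conflist written above configures the bridge CNI plugin for the 10.244.0.0/16 pod CIDR chosen earlier. The sketch below emits a typical bridge conflist of that shape; the field values are illustrative, not the exact 496-byte file from the log.

// Hypothetical sketch: emit a typical bridge CNI conflist shaped like the
// 1-k8s.conflist written above, using the pod CIDR from the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}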
I0920 16:45:01.590373 16686 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0920 16:45:01.590503 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:01.590518 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-489802 minikube.k8s.io/updated_at=2024_09_20T16_45_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-489802 minikube.k8s.io/primary=true
I0920 16:45:01.611693 16686 ops.go:34] apiserver oom_adj: -16
I0920 16:45:01.740445 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:02.241564 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:02.740509 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:03.241160 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:03.740876 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:04.241125 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:04.740796 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:05.241433 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:05.740524 16686 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:05.862361 16686 kubeadm.go:1113] duration metric: took 4.271922428s to wait for elevateKubeSystemPrivileges
I0920 16:45:05.862397 16686 kubeadm.go:394] duration metric: took 14.505940675s to StartCluster
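The repeated kubectl get sa default runs above are a poll loop: elevateKubeSystemPrivileges retries roughly every 500ms until the default service account exists, which is why the step took about 4.3s. A sketch of that pattern follows; the timeout value and overall structure are assumptions, not minikube's implementation.

// Hypothetical sketch of the poll loop logged above: retry the service
// account lookup roughly every 500ms until it succeeds or a deadline passes.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The 2 minute deadline is an assumed example value.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		cmd := exec.CommandContext(ctx, "sudo",
			"/var/lib/minikube/binaries/v1.31.1/kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists; kube-system privileges are ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting for the default service account:", ctx.Err())
			return
		case <-ticker.C:
		}
	}
}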
I0920 16:45:05.862414 16686 settings.go:142] acquiring lock: {Name:mk2a13c58cdc0faf8cddca5d6716175d45db9bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:45:05.862558 16686 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19672-8777/kubeconfig
I0920 16:45:05.862903 16686 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8777/kubeconfig: {Name:mkf32a4c736808e023459b2f0e40188618a38db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:45:05.863101 16686 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I0920 16:45:05.863138 16686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0920 16:45:05.863158 16686 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0920 16:45:05.863290 16686 addons.go:69] Setting yakd=true in profile "addons-489802"
I0920 16:45:05.863282 16686 addons.go:69] Setting default-storageclass=true in profile "addons-489802"
I0920 16:45:05.863308 16686 addons.go:234] Setting addon yakd=true in "addons-489802"
I0920 16:45:05.863317 16686 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-489802"
I0920 16:45:05.863312 16686 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-489802"
I0920 16:45:05.863314 16686 addons.go:69] Setting cloud-spanner=true in profile "addons-489802"
I0920 16:45:05.863340 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.863341 16686 addons.go:234] Setting addon cloud-spanner=true in "addons-489802"
I0920 16:45:05.863342 16686 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-489802"
I0920 16:45:05.863361 16686 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-489802"
I0920 16:45:05.863363 16686 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-489802"
I0920 16:45:05.863375 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.863390 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.863391 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.863391 16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 16:45:05.863448 16686 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-489802"
I0920 16:45:05.863461 16686 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-489802"
I0920 16:45:05.863793 16686 addons.go:69] Setting gcp-auth=true in profile "addons-489802"
I0920 16:45:05.863800 16686 addons.go:69] Setting ingress-dns=true in profile "addons-489802"
I0920 16:45:05.863804 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.863808 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.863821 16686 addons.go:69] Setting ingress=true in profile "addons-489802"
I0920 16:45:05.863824 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.863831 16686 addons.go:69] Setting metrics-server=true in profile "addons-489802"
I0920 16:45:05.863821 16686 addons.go:69] Setting inspektor-gadget=true in profile "addons-489802"
I0920 16:45:05.863839 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.863843 16686 addons.go:234] Setting addon metrics-server=true in "addons-489802"
I0920 16:45:05.863845 16686 addons.go:69] Setting volcano=true in profile "addons-489802"
I0920 16:45:05.863812 16686 mustload.go:65] Loading cluster: addons-489802
I0920 16:45:05.863852 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.863856 16686 addons.go:234] Setting addon volcano=true in "addons-489802"
I0920 16:45:05.863865 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.863881 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.863918 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.863925 16686 addons.go:69] Setting registry=true in profile "addons-489802"
I0920 16:45:05.863943 16686 addons.go:234] Setting addon registry=true in "addons-489802"
I0920 16:45:05.863943 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.863955 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.863978 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.864003 16686 config.go:182] Loaded profile config "addons-489802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 16:45:05.864008 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.864067 16686 addons.go:69] Setting storage-provisioner=true in profile "addons-489802"
I0920 16:45:05.864077 16686 addons.go:234] Setting addon storage-provisioner=true in "addons-489802"
I0920 16:45:05.864162 16686 addons.go:69] Setting volumesnapshots=true in profile "addons-489802"
I0920 16:45:05.864180 16686 addons.go:234] Setting addon volumesnapshots=true in "addons-489802"
I0920 16:45:05.864214 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.864241 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.864270 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.863833 16686 addons.go:234] Setting addon ingress=true in "addons-489802"
I0920 16:45:05.864312 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.864337 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.863812 16686 addons.go:234] Setting addon ingress-dns=true in "addons-489802"
I0920 16:45:05.864407 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.863847 16686 addons.go:234] Setting addon inspektor-gadget=true in "addons-489802"
I0920 16:45:05.863810 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.864596 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.864641 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.864662 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.864741 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.864770 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.864799 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.864991 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.864993 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.865016 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.865021 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.865128 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.865158 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.865250 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.865287 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.865605 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.873149 16686 out.go:177] * Verifying Kubernetes components...
I0920 16:45:05.875354 16686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 16:45:05.886351 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.886408 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.886439 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.886493 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.886542 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
I0920 16:45:05.886778 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
I0920 16:45:05.886908 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
I0920 16:45:05.887721 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.887867 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.887935 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.888511 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.888539 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.888665 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.888682 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.889051 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.889074 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.889168 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
I0920 16:45:05.889340 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.889387 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.889430 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.889990 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.890030 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.890136 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.890165 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.894535 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.895113 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.895154 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.904311 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.904341 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.905034 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.905227 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:05.910612 16686 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-489802"
I0920 16:45:05.910663 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.911040 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.911095 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.911196 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
I0920 16:45:05.912127 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
I0920 16:45:05.912633 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.913296 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.913317 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.913620 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
I0920 16:45:05.913784 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
I0920 16:45:05.913785 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.914527 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.914569 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.914814 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.914815 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.915345 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.915366 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.915470 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.915488 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.916370 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.916574 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:05.916621 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.917159 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.917200 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.917629 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.918192 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.918213 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.918613 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.918669 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.919045 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.919074 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.922095 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:05.925413 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
I0920 16:45:05.926161 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.926895 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.926919 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.927445 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.928038 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.928083 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.930652 16686 addons.go:234] Setting addon default-storageclass=true in "addons-489802"
I0920 16:45:05.930702 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:05.931084 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.931143 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.932706 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
I0920 16:45:05.933363 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.934073 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.934093 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.934558 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.935171 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.935210 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.941706 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
I0920 16:45:05.942347 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.943149 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.943173 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.943717 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.949811 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
I0920 16:45:05.950710 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.950769 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.951083 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.951845 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.951868 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.952349 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.952538 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:05.953123 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
I0920 16:45:05.954739 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.955577 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
I0920 16:45:05.956118 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
I0920 16:45:05.956311 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.956877 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.956902 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.957263 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.957283 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.958119 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.958195 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
I0920 16:45:05.958880 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.958921 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.959186 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.959739 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.959761 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.959785 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.960399 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.960985 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.961025 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.961535 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.961729 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:05.961940 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.961958 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.962782 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.963365 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:05.963414 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:05.963800 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:05.966313 16686 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0920 16:45:05.967714 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
I0920 16:45:05.967733 16686 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0920 16:45:05.967750 16686 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0920 16:45:05.967775 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:05.971362 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
I0920 16:45:05.972858 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
I0920 16:45:05.974844 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:05.975487 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:05.975517 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:05.975763 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:05.975965 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:05.976140 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:05.976363 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:05.977671 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
I0920 16:45:05.978187 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.981448 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
I0920 16:45:05.981604 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
I0920 16:45:05.982424 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.982550 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.982830 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.982881 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.983467 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.983492 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.983551 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.983961 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.983979 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.984042 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.984224 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:05.984715 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
I0920 16:45:05.984871 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:05.984923 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.985197 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:05.986711 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:05.987367 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:05.987635 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:05.987654 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:05.987994 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:05.988156 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:05.988566 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
I0920 16:45:05.989594 16686 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0920 16:45:05.990395 16686 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0920 16:45:05.991212 16686 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 16:45:05.991233 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0920 16:45:05.991257 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:05.991416 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:05.992716 16686 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0920 16:45:05.992737 16686 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0920 16:45:05.992760 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:05.992873 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
I0920 16:45:05.993699 16686 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0920 16:45:05.995293 16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0920 16:45:05.995314 16686 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0920 16:45:05.995337 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:05.995421 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:05.995474 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:05.995494 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:05.995520 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:05.995539 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
I0920 16:45:06.002124 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.002163 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.002180 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.002186 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.002226 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.002256 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.002186 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.002304 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.002330 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.002392 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:06.002441 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:06.002794 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.002895 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.003001 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.003084 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.003168 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.003348 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.003599 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.003651 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.003661 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.003693 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.003693 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.003708 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.003715 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.003952 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.003969 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.004102 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.004235 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.004248 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.004312 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.004332 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.004348 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.004535 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.004574 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.004738 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.004727 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.004793 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.005068 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.005104 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:06.005120 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.005134 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:06.005135 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.005145 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.006374 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.006382 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.006398 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.006377 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.007189 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.007202 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
I0920 16:45:06.007213 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.007251 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
I0920 16:45:06.007358 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.007582 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:06.007618 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:06.008305 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.009013 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.009036 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.009097 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.009454 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:06.009483 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:06.011482 16686 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0920 16:45:06.011667 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:06.011700 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:06.011718 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.011719 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:06.011730 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:06.011738 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:06.011780 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.012083 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:06.012119 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:06.012127 16686 main.go:141] libmachine: Making call to close connection to plugin binary
W0920 16:45:06.012215 16686 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I0920 16:45:06.013040 16686 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0920 16:45:06.013057 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0920 16:45:06.013076 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.013854 16686 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0920 16:45:06.013875 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.014222 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.014278 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.015566 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.015585 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.016191 16686 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0920 16:45:06.016298 16686 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0920 16:45:06.016476 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.016889 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.017494 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.018839 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.019261 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.019283 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.019485 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.019664 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.019716 16686 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0920 16:45:06.019816 16686 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 16:45:06.019996 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.020051 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.020211 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.020731 16686 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0920 16:45:06.021987 16686 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0920 16:45:06.022029 16686 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0920 16:45:06.022093 16686 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 16:45:06.022300 16686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:45:06.022755 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0920 16:45:06.022776 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.023143 16686 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0920 16:45:06.023160 16686 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0920 16:45:06.023177 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.024174 16686 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0920 16:45:06.024191 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0920 16:45:06.024275 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.024664 16686 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0920 16:45:06.025980 16686 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0920 16:45:06.027309 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.027785 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.027815 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.027929 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.028009 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.028181 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.028474 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.028495 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.028615 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.028701 16686 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0920 16:45:06.028891 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.028889 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.028923 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.029196 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.029192 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.029222 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.029483 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.029709 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.029887 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.029906 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.030033 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.030190 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.031196 16686 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0920 16:45:06.032725 16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0920 16:45:06.032746 16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0920 16:45:06.032780 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.034644 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
I0920 16:45:06.035197 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
I0920 16:45:06.035340 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.036022 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.036041 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.036112 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.036407 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.036475 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.036695 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.036796 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.036813 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.037369 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.037379 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.037431 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
I0920 16:45:06.037435 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.037447 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.037568 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.037633 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
I0920 16:45:06.037767 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.037792 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.037889 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.037985 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.038291 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.038315 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.038531 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:06.038620 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.038675 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.038861 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.039491 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.039654 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:06.039669 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:06.040233 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:06.040465 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:06.040605 16686 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0920 16:45:06.040832 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.041303 16686 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0920 16:45:06.041318 16686 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0920 16:45:06.041334 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.041615 16686 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0920 16:45:06.042140 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:06.043269 16686 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0920 16:45:06.043289 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0920 16:45:06.043306 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.044349 16686 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0920 16:45:06.044617 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.044625 16686 out.go:177] - Using image docker.io/busybox:stable
I0920 16:45:06.045036 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.045057 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.045261 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.045420 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.045924 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.046045 16686 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 16:45:06.046062 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0920 16:45:06.046076 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.046233 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.046927 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.047431 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.047463 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.047597 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.047765 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.047891 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.048008 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.048154 16686 out.go:177] - Using image docker.io/registry:2.8.3
I0920 16:45:06.049631 16686 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0920 16:45:06.049649 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0920 16:45:06.049663 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.049676 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:06.050129 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.050156 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.050430 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.050586 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.050750 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.050868 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.052498 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.052871 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:06.052900 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:06.053033 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:06.053170 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:06.053326 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:06.053496 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:06.353051 16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0920 16:45:06.353074 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0920 16:45:06.375750 16686 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0920 16:45:06.375808 16686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0920 16:45:06.391326 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 16:45:06.493613 16686 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0920 16:45:06.493638 16686 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0920 16:45:06.505773 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0920 16:45:06.532977 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0920 16:45:06.533515 16686 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0920 16:45:06.533534 16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0920 16:45:06.540683 16686 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0920 16:45:06.540708 16686 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0920 16:45:06.543084 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0920 16:45:06.544984 16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0920 16:45:06.545000 16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0920 16:45:06.551458 16686 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0920 16:45:06.551479 16686 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0920 16:45:06.556172 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0920 16:45:06.557507 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 16:45:06.566682 16686 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0920 16:45:06.566703 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0920 16:45:06.627313 16686 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0920 16:45:06.627340 16686 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0920 16:45:06.640927 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:45:06.670548 16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0920 16:45:06.670574 16686 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0920 16:45:06.763522 16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0920 16:45:06.763549 16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0920 16:45:06.783481 16686 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0920 16:45:06.783521 16686 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0920 16:45:06.819177 16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0920 16:45:06.819204 16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0920 16:45:06.839272 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0920 16:45:06.896200 16686 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0920 16:45:06.896230 16686 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0920 16:45:06.910579 16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0920 16:45:06.910614 16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0920 16:45:06.930437 16686 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0920 16:45:06.930463 16686 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0920 16:45:06.940831 16686 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0920 16:45:06.940867 16686 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0920 16:45:07.047035 16686 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0920 16:45:07.047062 16686 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0920 16:45:07.215806 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0920 16:45:07.218901 16686 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0920 16:45:07.218932 16686 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0920 16:45:07.223882 16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0920 16:45:07.223905 16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0920 16:45:07.227082 16686 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0920 16:45:07.227103 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0920 16:45:07.256340 16686 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0920 16:45:07.256375 16686 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0920 16:45:07.464044 16686 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0920 16:45:07.464078 16686 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0920 16:45:07.493814 16686 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0920 16:45:07.493851 16686 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0920 16:45:07.582458 16686 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:45:07.582479 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0920 16:45:07.603848 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0920 16:45:07.828047 16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0920 16:45:07.828070 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0920 16:45:07.844298 16686 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0920 16:45:07.844335 16686 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0920 16:45:08.029971 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:45:08.174001 16686 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0920 16:45:08.174023 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0920 16:45:08.192445 16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0920 16:45:08.192475 16686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0920 16:45:08.510930 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0920 16:45:08.524911 16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0920 16:45:08.524942 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0920 16:45:08.726846 16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0920 16:45:08.726879 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0920 16:45:09.009410 16686 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 16:45:09.009447 16686 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0920 16:45:09.024627 16686 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.648835712s)
I0920 16:45:09.024679 16686 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.648847664s)
I0920 16:45:09.024704 16686 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0920 16:45:09.024765 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.633411979s)
I0920 16:45:09.024811 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:09.024825 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:09.025119 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:09.025144 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:09.025153 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:09.025161 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:09.025404 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:09.025445 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:09.025920 16686 node_ready.go:35] waiting up to 6m0s for node "addons-489802" to be "Ready" ...
I0920 16:45:09.035518 16686 node_ready.go:49] node "addons-489802" has status "Ready":"True"
I0920 16:45:09.035609 16686 node_ready.go:38] duration metric: took 9.661904ms for node "addons-489802" to be "Ready" ...
I0920 16:45:09.035637 16686 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 16:45:09.051148 16686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace to be "Ready" ...
I0920 16:45:09.322288 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 16:45:09.534546 16686 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-489802" context rescaled to 1 replicas
I0920 16:45:11.158586 16686 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:12.692545 16686 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:12.692574 16686 pod_ready.go:82] duration metric: took 3.641395186s for pod "coredns-7c65d6cfc9-nqbzq" in "kube-system" namespace to be "Ready" ...
I0920 16:45:12.692587 16686 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace to be "Ready" ...
I0920 16:45:12.993726 16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0920 16:45:12.993782 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:12.997095 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:12.997468 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:12.997509 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:12.997646 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:12.997868 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:12.998029 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:12.998260 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:13.539202 16686 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0920 16:45:13.682847 16686 addons.go:234] Setting addon gcp-auth=true in "addons-489802"
I0920 16:45:13.682906 16686 host.go:66] Checking if "addons-489802" exists ...
I0920 16:45:13.683199 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:13.683239 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:13.702441 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
I0920 16:45:13.702905 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:13.703420 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:13.703442 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:13.703814 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:13.704438 16686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 16:45:13.704485 16686 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 16:45:13.722380 16686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
I0920 16:45:13.723033 16686 main.go:141] libmachine: () Calling .GetVersion
I0920 16:45:13.723749 16686 main.go:141] libmachine: Using API Version 1
I0920 16:45:13.723776 16686 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 16:45:13.724178 16686 main.go:141] libmachine: () Calling .GetMachineName
I0920 16:45:13.724416 16686 main.go:141] libmachine: (addons-489802) Calling .GetState
I0920 16:45:13.726164 16686 main.go:141] libmachine: (addons-489802) Calling .DriverName
I0920 16:45:13.726406 16686 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0920 16:45:13.726432 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHHostname
I0920 16:45:13.729255 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:13.729760 16686 main.go:141] libmachine: (addons-489802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:85:db", ip: ""} in network mk-addons-489802: {Iface:virbr1 ExpiryTime:2024-09-20 17:44:35 +0000 UTC Type:0 Mac:52:54:00:bf:85:db Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-489802 Clientid:01:52:54:00:bf:85:db}
I0920 16:45:13.729791 16686 main.go:141] libmachine: (addons-489802) DBG | domain addons-489802 has defined IP address 192.168.39.89 and MAC address 52:54:00:bf:85:db in network mk-addons-489802
I0920 16:45:13.729945 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHPort
I0920 16:45:13.730109 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHKeyPath
I0920 16:45:13.730294 16686 main.go:141] libmachine: (addons-489802) Calling .GetSSHUsername
I0920 16:45:13.730440 16686 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-8777/.minikube/machines/addons-489802/id_rsa Username:docker}
I0920 16:45:13.776226 16686 pod_ready.go:98] pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:13 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:45:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-20 16:45:10 +0000 UTC,FinishedAt:2024-09-20 16:45:11 +0000 UTC,ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4 Started:0xc00179695c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a1e830} {Name:kube-api-access-l4wh8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a1e840}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0920 16:45:13.776273 16686 pod_ready.go:82] duration metric: took 1.083676607s for pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace to be "Ready" ...
E0920 16:45:13.776285 16686 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-tm9vr" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:13 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 16:45:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 16:45:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-09-20 16:45:10 +0000 UTC,FinishedAt:2024-09-20 16:45:11 +0000 UTC,ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://918bc5fe873828ba31e8b226c084835ff9648d49d56fd967df98c04026fcd9c4 Started:0xc00179695c AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a1e830} {Name:kube-api-access-l4wh8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a1e840}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0920 16:45:13.776297 16686 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-489802" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.895071 16686 pod_ready.go:93] pod "etcd-addons-489802" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:13.895098 16686 pod_ready.go:82] duration metric: took 118.793361ms for pod "etcd-addons-489802" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.895111 16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-489802" in "kube-system" namespace to be "Ready" ...
I0920 16:45:14.014764 16686 pod_ready.go:93] pod "kube-apiserver-addons-489802" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:14.014787 16686 pod_ready.go:82] duration metric: took 119.668585ms for pod "kube-apiserver-addons-489802" in "kube-system" namespace to be "Ready" ...
I0920 16:45:14.014841 16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-489802" in "kube-system" namespace to be "Ready" ...
I0920 16:45:14.127671 16686 pod_ready.go:93] pod "kube-controller-manager-addons-489802" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:14.127694 16686 pod_ready.go:82] duration metric: took 112.838527ms for pod "kube-controller-manager-addons-489802" in "kube-system" namespace to be "Ready" ...
I0920 16:45:14.127705 16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xr4bt" in "kube-system" namespace to be "Ready" ...
I0920 16:45:14.150341 16686 pod_ready.go:93] pod "kube-proxy-xr4bt" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:14.150367 16686 pod_ready.go:82] duration metric: took 22.655966ms for pod "kube-proxy-xr4bt" in "kube-system" namespace to be "Ready" ...
I0920 16:45:14.150376 16686 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-489802" in "kube-system" namespace to be "Ready" ...
I0920 16:45:14.206202 16686 pod_ready.go:93] pod "kube-scheduler-addons-489802" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:14.206226 16686 pod_ready.go:82] duration metric: took 55.843139ms for pod "kube-scheduler-addons-489802" in "kube-system" namespace to be "Ready" ...
I0920 16:45:14.206238 16686 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace to be "Ready" ...
I0920 16:45:15.135704 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.629885928s)
I0920 16:45:15.135777 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.135782 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.602774066s)
I0920 16:45:15.135815 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.135832 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.135837 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.592733845s)
I0920 16:45:15.135860 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.135874 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.135791 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.135976 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.579777747s)
I0920 16:45:15.136071 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136137 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.136165 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.136165 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.136176 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.136187 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.136191 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136195 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136195 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.136202 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136241 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136199 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136269 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.578731979s)
I0920 16:45:15.136290 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.136196 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.136312 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.136322 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136299 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136332 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136345 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136388 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.297083849s)
I0920 16:45:15.136410 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136420 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136467 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.136492 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.136499 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.136506 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136513 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136540 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.920700025s)
I0920 16:45:15.136560 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136569 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136342 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.495383696s)
I0920 16:45:15.136654 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136666 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136665 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.532769315s)
I0920 16:45:15.136718 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136726 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.136765 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.106759371s)
I0920 16:45:15.136781 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
W0920 16:45:15.136792 16686 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 16:45:15.136807 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.136815 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.136815 16686 retry.go:31] will retry after 374.579066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 16:45:15.136939 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.625889401s)
I0920 16:45:15.136963 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.136976 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.137039 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.137050 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.137071 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.137102 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.137131 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.137137 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.137152 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.137158 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.137108 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.137170 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.137178 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.137186 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.137875 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.137908 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.137915 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.137922 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.137929 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.137975 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.137994 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.137999 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.138006 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.138013 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.138047 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.138061 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.138078 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.138084 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.138093 16686 addons.go:475] Verifying addon registry=true in "addons-489802"
I0920 16:45:15.138895 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.138916 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.138927 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.138936 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.139035 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.139050 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.137073 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.139271 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.137144 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.139348 16686 addons.go:475] Verifying addon ingress=true in "addons-489802"
I0920 16:45:15.139477 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.137089 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.139526 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.139550 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.139564 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.139719 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.139735 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.139509 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.139873 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.139884 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.139894 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.140278 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.140316 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.140328 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.141359 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.141378 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.141387 16686 addons.go:475] Verifying addon metrics-server=true in "addons-489802"
I0920 16:45:15.141742 16686 out.go:177] * Verifying ingress addon...
I0920 16:45:15.141861 16686 out.go:177] * Verifying registry addon...
I0920 16:45:15.142395 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.142416 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.142438 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:15.144272 16686 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-489802 service yakd-dashboard -n yakd-dashboard
I0920 16:45:15.144625 16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0920 16:45:15.144652 16686 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0920 16:45:15.182676 16686 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0920 16:45:15.182707 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:15.183762 16686 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0920 16:45:15.183790 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:15.473454 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.473474 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.473959 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.473976 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:15.479442 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:15.479466 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:15.479704 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:15.479721 16686 main.go:141] libmachine: Making call to close connection to plugin binary
W0920 16:45:15.479879 16686 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0920 16:45:15.512325 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:45:15.658712 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:15.659607 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:16.155622 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:16.160001 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:16.241480 16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:16.517442 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.195100107s)
I0920 16:45:16.517489 16686 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.791061379s)
I0920 16:45:16.517497 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:16.517513 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:16.517795 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:16.517795 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:16.517817 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:16.517843 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:16.517851 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:16.518062 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:16.518079 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:16.518089 16686 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-489802"
I0920 16:45:16.519716 16686 out.go:177] * Verifying csi-hostpath-driver addon...
I0920 16:45:16.519723 16686 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 16:45:16.521078 16686 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0920 16:45:16.521713 16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 16:45:16.522238 16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0920 16:45:16.522258 16686 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0920 16:45:16.561413 16686 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 16:45:16.561441 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:16.652853 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:16.654932 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:16.670493 16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0920 16:45:16.670518 16686 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0920 16:45:16.788959 16686 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 16:45:16.788986 16686 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0920 16:45:16.869081 16686 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 16:45:17.027599 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:17.156633 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:17.157163 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:17.527462 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:17.650521 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:17.650643 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:17.734897 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.222504857s)
I0920 16:45:17.734961 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:17.734978 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:17.735373 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:17.735395 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:17.735414 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:17.735423 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:17.735676 16686 main.go:141] libmachine: (addons-489802) DBG | Closing plugin on server side
I0920 16:45:17.735715 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:17.735732 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:18.039389 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:18.191248 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:18.192032 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:18.226929 16686 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.357782077s)
I0920 16:45:18.227006 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:18.227027 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:18.227352 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:18.227371 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:18.227380 16686 main.go:141] libmachine: Making call to close driver server
I0920 16:45:18.227388 16686 main.go:141] libmachine: (addons-489802) Calling .Close
I0920 16:45:18.227596 16686 main.go:141] libmachine: Successfully made call to close driver server
I0920 16:45:18.227608 16686 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 16:45:18.229117 16686 addons.go:475] Verifying addon gcp-auth=true in "addons-489802"
I0920 16:45:18.230928 16686 out.go:177] * Verifying gcp-auth addon...
I0920 16:45:18.233132 16686 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0920 16:45:18.302814 16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:18.303833 16686 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 16:45:18.303849 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:18.526206 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:18.650162 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:18.650906 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:18.737130 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:19.027359 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:19.151083 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:19.152167 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:19.237097 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:19.530489 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:19.651552 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:19.651799 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:19.737916 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:20.027552 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:20.150028 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:20.150617 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:20.237634 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:20.527445 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:20.651604 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:20.652378 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:20.712902 16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:20.736944 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:21.029114 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:21.149408 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:21.150699 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:21.236999 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:21.527442 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:21.967907 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:21.968174 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:22.070927 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:22.072675 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:22.149613 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:22.150237 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:22.237824 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:22.531579 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:22.650997 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:22.651735 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:22.714124 16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:22.738003 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:23.036430 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:23.154161 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:23.155271 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:23.274914 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:23.528959 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:23.662172 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:23.665690 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:23.747609 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:24.028698 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:24.163651 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:24.164456 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:24.248826 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:24.526972 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:24.652716 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:24.653397 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:24.715653 16686 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:24.740107 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:25.028341 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:25.150991 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:25.153743 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:25.634814 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:25.635566 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:25.651776 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:25.652748 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:25.736431 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:26.032193 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:26.150517 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:26.150967 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:26.238433 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:26.527250 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:26.650016 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:26.650451 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:26.737952 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:27.027290 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:27.150220 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:27.150405 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:27.213074 16686 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:27.213099 16686 pod_ready.go:82] duration metric: took 13.006853784s for pod "nvidia-device-plugin-daemonset-54hhx" in "kube-system" namespace to be "Ready" ...
I0920 16:45:27.213106 16686 pod_ready.go:39] duration metric: took 18.177423912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 16:45:27.213122 16686 api_server.go:52] waiting for apiserver process to appear ...
I0920 16:45:27.213169 16686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:27.236400 16686 api_server.go:72] duration metric: took 21.373270823s to wait for apiserver process to appear ...
I0920 16:45:27.236426 16686 api_server.go:88] waiting for apiserver healthz status ...
I0920 16:45:27.236445 16686 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
I0920 16:45:27.239701 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:27.242110 16686 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
ok
I0920 16:45:27.243105 16686 api_server.go:141] control plane version: v1.31.1
I0920 16:45:27.243132 16686 api_server.go:131] duration metric: took 6.699495ms to wait for apiserver health ...
I0920 16:45:27.243142 16686 system_pods.go:43] waiting for kube-system pods to appear ...
I0920 16:45:27.251414 16686 system_pods.go:59] 17 kube-system pods found
I0920 16:45:27.251443 16686 system_pods.go:61] "coredns-7c65d6cfc9-nqbzq" [734f1782-975a-486b-adf3-32f60c376a9a] Running
I0920 16:45:27.251451 16686 system_pods.go:61] "csi-hostpath-attacher-0" [8fc733e6-4135-418b-a554-490bd25dabe7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 16:45:27.251458 16686 system_pods.go:61] "csi-hostpath-resizer-0" [85755d16-e8fa-4878-9184-45658ba8d8ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 16:45:27.251465 16686 system_pods.go:61] "csi-hostpathplugin-hglqr" [0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 16:45:27.251469 16686 system_pods.go:61] "etcd-addons-489802" [7f35387b-c7f1-4436-8369-77849b9c2383] Running
I0920 16:45:27.251475 16686 system_pods.go:61] "kube-apiserver-addons-489802" [9c4029f4-8e01-4d5d-a866-518e553ac713] Running
I0920 16:45:27.251481 16686 system_pods.go:61] "kube-controller-manager-addons-489802" [30219691-4d43-476d-8720-80aa4f2b6b54] Running
I0920 16:45:27.251488 16686 system_pods.go:61] "kube-ingress-dns-minikube" [1f722d5e-9dee-4b0e-8661-9c4181ea4f9b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0920 16:45:27.251495 16686 system_pods.go:61] "kube-proxy-xr4bt" [7a20cb9e-3e82-4bda-9529-7e024f9681a4] Running
I0920 16:45:27.251504 16686 system_pods.go:61] "kube-scheduler-addons-489802" [8b17a764-82bc-4003-8b0c-9d46c614e15d] Running
I0920 16:45:27.251512 16686 system_pods.go:61] "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 16:45:27.251518 16686 system_pods.go:61] "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
I0920 16:45:27.251526 16686 system_pods.go:61] "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0920 16:45:27.251534 16686 system_pods.go:61] "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 16:45:27.251542 16686 system_pods.go:61] "snapshot-controller-56fcc65765-2hz6g" [0d531a52-cced-4b3d-adfd-5d62357591e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:27.251549 16686 system_pods.go:61] "snapshot-controller-56fcc65765-4l9hv" [eccfc252-ad9c-4b70-bb1c-d81a71214556] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:27.251553 16686 system_pods.go:61] "storage-provisioner" [1e04b7e0-a0fe-4e65-9ba5-63be2690da1d] Running
I0920 16:45:27.251561 16686 system_pods.go:74] duration metric: took 8.412514ms to wait for pod list to return data ...
I0920 16:45:27.251568 16686 default_sa.go:34] waiting for default service account to be created ...
I0920 16:45:27.254735 16686 default_sa.go:45] found service account: "default"
I0920 16:45:27.254760 16686 default_sa.go:55] duration metric: took 3.185589ms for default service account to be created ...
I0920 16:45:27.254770 16686 system_pods.go:116] waiting for k8s-apps to be running ...
I0920 16:45:27.261725 16686 system_pods.go:86] 17 kube-system pods found
I0920 16:45:27.261752 16686 system_pods.go:89] "coredns-7c65d6cfc9-nqbzq" [734f1782-975a-486b-adf3-32f60c376a9a] Running
I0920 16:45:27.261759 16686 system_pods.go:89] "csi-hostpath-attacher-0" [8fc733e6-4135-418b-a554-490bd25dabe7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 16:45:27.261766 16686 system_pods.go:89] "csi-hostpath-resizer-0" [85755d16-e8fa-4878-9184-45658ba8d8ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 16:45:27.261772 16686 system_pods.go:89] "csi-hostpathplugin-hglqr" [0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 16:45:27.261776 16686 system_pods.go:89] "etcd-addons-489802" [7f35387b-c7f1-4436-8369-77849b9c2383] Running
I0920 16:45:27.261780 16686 system_pods.go:89] "kube-apiserver-addons-489802" [9c4029f4-8e01-4d5d-a866-518e553ac713] Running
I0920 16:45:27.261784 16686 system_pods.go:89] "kube-controller-manager-addons-489802" [30219691-4d43-476d-8720-80aa4f2b6b54] Running
I0920 16:45:27.261791 16686 system_pods.go:89] "kube-ingress-dns-minikube" [1f722d5e-9dee-4b0e-8661-9c4181ea4f9b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0920 16:45:27.261795 16686 system_pods.go:89] "kube-proxy-xr4bt" [7a20cb9e-3e82-4bda-9529-7e024f9681a4] Running
I0920 16:45:27.261799 16686 system_pods.go:89] "kube-scheduler-addons-489802" [8b17a764-82bc-4003-8b0c-9d46c614e15d] Running
I0920 16:45:27.261805 16686 system_pods.go:89] "metrics-server-84c5f94fbc-txlrn" [b6d2625e-ba6e-44e1-b245-0edc2adaa243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 16:45:27.261809 16686 system_pods.go:89] "nvidia-device-plugin-daemonset-54hhx" [b022f644-f7de-4d74-aed4-63ad47ef0b71] Running
I0920 16:45:27.261815 16686 system_pods.go:89] "registry-66c9cd494c-7swkh" [1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0920 16:45:27.261820 16686 system_pods.go:89] "registry-proxy-ggl6q" [a467b141-5827-4440-b11f-9203739b4a10] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 16:45:27.261828 16686 system_pods.go:89] "snapshot-controller-56fcc65765-2hz6g" [0d531a52-cced-4b3d-adfd-5d62357591e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:27.261858 16686 system_pods.go:89] "snapshot-controller-56fcc65765-4l9hv" [eccfc252-ad9c-4b70-bb1c-d81a71214556] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:27.261868 16686 system_pods.go:89] "storage-provisioner" [1e04b7e0-a0fe-4e65-9ba5-63be2690da1d] Running
I0920 16:45:27.261877 16686 system_pods.go:126] duration metric: took 7.099706ms to wait for k8s-apps to be running ...
I0920 16:45:27.261887 16686 system_svc.go:44] waiting for kubelet service to be running ....
I0920 16:45:27.261932 16686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0920 16:45:27.276406 16686 system_svc.go:56] duration metric: took 14.508978ms WaitForService to wait for kubelet
I0920 16:45:27.276438 16686 kubeadm.go:582] duration metric: took 21.413312681s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 16:45:27.276460 16686 node_conditions.go:102] verifying NodePressure condition ...
I0920 16:45:27.280248 16686 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0920 16:45:27.280278 16686 node_conditions.go:123] node cpu capacity is 2
I0920 16:45:27.280291 16686 node_conditions.go:105] duration metric: took 3.825237ms to run NodePressure ...
I0920 16:45:27.280304 16686 start.go:241] waiting for startup goroutines ...
I0920 16:45:27.526718 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:27.649095 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:27.649421 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:27.737354 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:28.027233 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:28.150225 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:28.150730 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:28.236702 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:28.528434 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:28.650325 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:28.650405 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:28.740070 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:29.026096 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:29.149445 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:29.150058 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:29.237452 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:29.527135 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:29.649902 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:29.649932 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:29.737559 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:30.026698 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:30.150115 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:30.150769 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:30.238484 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:30.527374 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:30.648850 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:30.649272 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:30.738810 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:31.028473 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:31.150589 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:31.156282 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:31.237373 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:31.527393 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:31.649166 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:31.650780 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:31.736824 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:32.027837 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:32.152463 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:32.153143 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:32.237068 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:32.528272 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:32.649079 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:32.650818 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:32.738352 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:33.026553 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:33.149902 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:33.150275 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:33.237524 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:33.537491 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:33.649781 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:33.650261 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:33.737265 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:34.028817 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:34.150791 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:34.152125 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:34.237490 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:34.526864 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:34.649685 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:34.650181 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:34.736977 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:35.029888 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:35.150945 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:35.155795 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:35.240335 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:35.527786 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:35.654336 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:35.655062 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:35.737485 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:36.027635 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:36.151566 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:36.152493 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:36.238231 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:36.527246 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:36.655057 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:36.655723 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:36.738138 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:37.030365 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:37.150592 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:37.150821 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:37.236830 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:37.526749 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:37.650962 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:37.652318 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:37.738164 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:38.031402 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:38.155846 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:38.156510 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:38.252531 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:38.528674 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:38.655016 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:38.658754 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:38.739024 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:39.026715 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:39.151013 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:39.154202 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:39.238586 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:39.527713 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:39.649075 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:39.649203 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:39.737480 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:40.027567 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:40.150474 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:40.151696 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:40.250888 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:40.526616 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:40.652188 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:40.652389 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:40.736985 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:41.026770 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:41.150827 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:41.151842 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:41.237101 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:41.526185 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:41.650288 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:41.650519 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:41.737186 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:42.027683 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:42.149240 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:42.150504 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:42.491904 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:42.592635 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:42.650756 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:42.651320 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:42.737069 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:43.029825 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:43.149551 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:43.149935 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:43.237114 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:43.528788 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:43.650325 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:43.650461 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:43.739219 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:44.027085 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:44.150296 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:44.150650 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:44.238279 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:44.527675 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:44.649728 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:44.650268 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:44.737823 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:45.028181 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:45.150501 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:45.151145 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:45.237285 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:45.527586 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:45.649593 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:45.650452 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:45.738407 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:46.030564 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:46.150486 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:46.150734 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:46.237087 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:46.551259 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:46.651342 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:46.653245 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:46.737384 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:47.029654 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:47.150343 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:47.150347 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:47.238187 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:47.535430 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:47.650178 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:47.651863 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:47.739041 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:48.029210 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:48.150091 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:48.154252 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:48.240363 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:48.529142 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:48.653143 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:48.655833 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:48.738746 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:49.027666 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:49.150751 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:49.151834 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:49.236647 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:49.530861 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:49.651140 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:49.651675 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:49.740617 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:50.026729 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:50.159867 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:50.160090 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:50.239757 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:50.527622 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:50.654766 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:50.655361 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:50.737483 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:51.027995 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:51.149643 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:51.149801 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:51.237905 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:51.526411 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:51.649489 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:51.650326 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:51.738210 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:52.036253 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:52.149599 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:52.151253 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:52.237057 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:52.527569 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:52.648975 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:52.650153 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:52.737191 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:53.027592 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:53.150060 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:53.150479 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:53.236403 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:53.526504 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:53.649297 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:53.651436 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:53.737405 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:54.028487 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:54.150980 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:54.151321 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:54.237711 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:54.527354 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:54.650301 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:54.650677 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:54.737955 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:55.031032 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:55.149243 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:55.150181 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:55.238167 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:55.528915 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:55.649892 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:55.650313 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:55.738797 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:56.028783 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:56.151114 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:56.151294 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:56.237410 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:56.527498 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:56.650436 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:56.650776 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:56.736898 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:57.026952 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:57.149669 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:57.150915 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:57.237031 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:57.526939 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:57.648982 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:57.650547 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:57.737696 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:58.026729 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:58.150041 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:58.150968 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:58.237146 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:58.527288 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:58.651780 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:58.652013 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:58.738908 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:59.026605 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:59.149437 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:59.149648 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:59.237722 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:59.527090 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:59.650035 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:59.651041 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:45:59.737351 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:00.027912 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:00.558370 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:00.561620 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:00.563942 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:00.565779 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:00.661977 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:00.662874 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:00.739219 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:01.029865 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:01.154749 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:01.155165 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:01.237401 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:01.530045 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:01.649221 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:01.649554 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:01.740003 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:02.026763 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:02.150502 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:02.150590 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:02.236863 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:02.529068 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:02.650888 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:02.651000 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:02.750263 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:03.026716 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:03.149149 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:03.149545 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:03.237369 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:03.534553 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:03.650442 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:03.650862 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:03.737614 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:04.026913 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:04.149387 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:04.149593 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:04.243360 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:04.527336 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:04.650842 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:04.651139 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:04.739255 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:05.027878 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:05.150204 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:46:05.150545 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:05.244231 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:05.529349 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:05.652867 16686 kapi.go:107] duration metric: took 50.508229978s to wait for kubernetes.io/minikube-addons=registry ...
I0920 16:46:05.652925 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:05.739640 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:06.033981 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:06.149185 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:06.237046 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:06.528004 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:06.649435 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:06.895278 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:07.026949 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:07.149429 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:07.237034 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:07.526452 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:07.649544 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:07.737620 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:08.028390 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:08.150933 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:08.237962 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:08.529026 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:08.650034 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:08.737105 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:09.027687 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:09.149020 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:09.239286 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:09.529929 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:09.666377 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:09.746102 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:10.030699 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:10.155669 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:10.239033 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:10.530724 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:10.651556 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:10.737407 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:11.027890 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:11.149069 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:11.236960 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:11.527373 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:11.649887 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:11.737323 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:12.027469 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:12.149540 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:12.237298 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:12.527280 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:12.650565 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:12.750782 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:13.027210 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:13.149266 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:13.236795 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:13.527089 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:13.650076 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:13.739568 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:14.028427 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:14.150142 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:14.238716 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:14.529618 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:14.649719 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:14.737439 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:15.029527 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:15.149916 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:15.236871 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:15.527484 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:15.660993 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:15.737550 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:16.027986 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:16.149414 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:16.237560 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:16.528143 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:16.649180 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:16.749844 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:17.027012 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:17.149822 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:17.237094 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:17.527302 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:17.650815 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:17.737697 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:18.027958 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:18.151414 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:18.237081 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:18.755707 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:18.756298 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:18.756334 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:19.027579 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:19.149746 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:19.237870 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:19.532636 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:19.649362 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:19.743684 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:20.029394 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:20.152735 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:20.238771 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:20.528220 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:20.650381 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:20.739497 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:21.028952 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:21.149828 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:21.238039 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:21.532796 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:21.648825 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:21.736739 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:22.025994 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:22.149742 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:22.237902 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:22.526869 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:22.651053 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:22.754073 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:23.029507 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:23.150844 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:23.236975 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:23.530954 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:23.649940 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:23.737663 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:24.027816 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:24.149027 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:24.236905 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:24.528126 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:24.649610 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:24.737256 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:25.029079 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:25.168465 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:25.279560 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:25.529941 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:25.649862 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:25.738675 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:26.031710 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:26.149047 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:26.237178 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:26.527079 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:26.649467 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:26.737219 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:27.027260 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:27.150392 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:27.237951 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:27.526593 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:27.649815 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:27.738065 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:28.026169 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:28.150226 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:28.237640 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:28.526680 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:28.649544 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:28.737407 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:29.027688 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:29.150021 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:29.236763 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:29.563052 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:29.652576 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:29.739028 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:30.029796 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:30.150520 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:30.240233 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:30.526626 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:30.651044 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:30.739007 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:31.027062 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:31.541329 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:31.546535 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:31.546967 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:31.652149 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:31.736761 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:32.026342 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:32.149699 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:32.238624 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:32.526975 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:32.650436 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:32.740112 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:33.028897 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:33.150155 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:33.250978 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:33.528932 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:33.649886 16686 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 16:46:33.743165 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:34.028352 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:34.150042 16686 kapi.go:107] duration metric: took 1m19.005386454s to wait for app.kubernetes.io/name=ingress-nginx ...
I0920 16:46:34.237404 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:34.526686 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:34.740025 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:35.033014 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:35.241504 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:35.527579 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:35.738045 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:36.034900 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:36.242839 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:36.528649 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:36.738556 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:37.027713 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:37.237641 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:37.527114 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:37.736812 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:38.027753 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:38.240755 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:38.526552 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:38.739220 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:39.027014 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:39.240347 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:39.534783 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:39.739002 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:40.032069 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:40.239670 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:40.527751 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:40.742044 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:41.026894 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:41.237898 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:41.526185 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:46:41.737861 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:42.026935 16686 kapi.go:107] duration metric: took 1m25.505217334s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0920 16:46:42.236807 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:42.738034 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:43.237393 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:43.739267 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:44.237884 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:44.738051 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:45.236733 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:45.737720 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:46.236788 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:46.739281 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:47.237290 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:47.737521 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:48.237326 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:48.737915 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:49.238707 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:49.738314 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:50.237798 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:50.737959 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:51.237197 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:51.737289 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:52.236949 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:52.737530 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:53.237179 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:53.737635 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:54.237901 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:54.737648 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:55.238274 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:55.738085 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:56.237671 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:56.737704 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:57.237419 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:57.737353 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:58.237702 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:58.737197 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:59.237153 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:59.737583 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:00.238191 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:00.737084 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:01.237072 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:01.737245 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:02.237128 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:02.737215 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:03.237530 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:03.737290 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:04.237086 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:04.737817 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:05.237856 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:05.738321 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:06.237429 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:06.737202 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:07.236740 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:07.738137 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:08.237395 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:08.738090 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:09.237251 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:09.847229 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:10.237467 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:10.737639 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:11.237672 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:11.737856 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:12.237892 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:12.737947 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:13.236851 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:13.737127 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:14.236749 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:14.737645 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:15.240515 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:15.737944 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:16.236760 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:16.737628 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:17.237203 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:17.736930 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:18.237666 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:18.737293 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:19.253355 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:19.738180 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:20.239996 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:20.737102 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:21.239307 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:21.737634 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:22.237896 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:22.738438 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:23.237672 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:23.737184 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:24.239150 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:24.737464 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:25.237351 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:25.737539 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:26.237905 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:26.737559 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:27.237704 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:27.738056 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:28.237766 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:28.737159 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:29.237477 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:29.737337 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:30.238578 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:30.737543 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:31.237419 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:31.737583 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:32.237893 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:32.737619 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:33.237679 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:33.737168 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:34.237268 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:34.737264 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:35.237495 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:35.738039 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:36.238149 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:36.737649 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:37.237524 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:37.737017 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:38.238138 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:38.737568 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:39.237391 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:39.736477 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:40.238059 16686 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:47:40.738010 16686 kapi.go:107] duration metric: took 2m22.504874191s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0920 16:47:40.740079 16686 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-489802 cluster.
I0920 16:47:40.741424 16686 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0920 16:47:40.742789 16686 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0920 16:47:40.744449 16686 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, inspektor-gadget, ingress-dns, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I0920 16:47:40.745981 16686 addons.go:510] duration metric: took 2m34.882823136s for enable addons: enabled=[nvidia-device-plugin cloud-spanner inspektor-gadget ingress-dns storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I0920 16:47:40.746064 16686 start.go:246] waiting for cluster config update ...
I0920 16:47:40.746085 16686 start.go:255] writing updated cluster config ...
I0920 16:47:40.746667 16686 ssh_runner.go:195] Run: rm -f paused
I0920 16:47:40.832742 16686 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0920 16:47:40.834777 16686 out.go:177] * Done! kubectl is now configured to use "addons-489802" cluster and "default" namespace by default
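[editor's note] The gcp-auth messages above (16:47:40) explain that credentials are mounted into every new pod and that a pod can opt out via a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec follows; the label key comes from the log output, while the label value "true", the pod name, and the command are assumptions added purely for illustration (the image is one already used elsewhere in this test run):

# Hypothetical pod spec showing the opt-out label mentioned in the gcp-auth output above.
# Only the label key is confirmed by the log; everything else is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds-demo            # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"     # asks the gcp-auth webhook to skip credential injection
spec:
  containers:
    - name: app
      image: gcr.io/k8s-minikube/busybox   # image referenced elsewhere in this test log
      command: ["sleep", "3600"]

Applying it with `kubectl --context addons-489802 apply -f <file>` (same context used throughout this log) would create a pod without the mounted GCP credentials, per the addon's description above.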
==> CRI-O <==
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.218037211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851419218008762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b823f095-4010-439d-84ce-2019842c72ba name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.218649776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f085ab55-fe88-4464-af73-edc0da3b0789 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.218730027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f085ab55-fe88-4464-af73-edc0da3b0789 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.220653955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7855191edf9dac6a02cb338d5c06ae79feb999af96c1205e987920277065d079,PodSandboxId:94183eceae2996500c16f3e0182cde507f6cd239b42d7e04c4be2ec3899c6a6f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851392690228669,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3fe17849-a80b-47ae-adf6-77c01273238d,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e1bcda36f90d21b7130116c5ea6b1229ea9a0700e45bbff41b308db6dbc33c,PodSandboxId:e98930c60622fe0f7bdc4bf6d6d08bc7526fc617d34122970d0d7182bd9138e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726851385595495335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a52579e-aa38-4262-8d40-663925dc3ec1,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595
ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5038dfb91d9e8dca86d31517d2006b94b6c908631d5f20394c86871e56d1d08,PodSandboxId:21621b7034dfd4946db396ab2ecb322c86b682626f5c0285738a87ba88bfbf23,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851379562446175,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5e840cda-1451-4279-88ae-f9ba29c00bec,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3789e1deba3b5c9ce3ea828aadfae5635edc167c040fa930464707e91be53341,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726850801062921486,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-
hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2170a5568649b01c67765f29e8fdff73695d347ea33e872cffc2746fb679bb35,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726850799105793588,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kube
rnetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20afd16f541ca333f7bd36f8da7782ea9a69ae24093ca113e872faea4de2b70,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726850796453830227,Labels:map[string]string{io.kubernetes.contain
er.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a74c2b2a3ee355c5e919249c37a775e1de74552c52cbd119a7bcde2f5ef8ff6,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726850794957838931,Labels:map[string]s
tring{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2,PodSandboxId:a115fb5bcdd70dd9eaddc86f186e4f6e55036b28dc0a72cf68edf7dae1530096,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38
d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726850793113655585,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-79mpt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f93f931b-28ea-417f-9956-b9dce76ebe38,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4141de5542403c5675e25ca0d8c438d502a45b49559475f
46261d4f34feaa611,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726850784957523000,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:0580426e8d27f7c106f5d251425428617a0b35941fbdbeb0cef1280abf386f6c,PodSandboxId:7a34dc197c7221a0f7968767406de9d9088af78de12ad00aa7c9e7602d006f7e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726850782617829339,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc733e6-4135-418b-a554-490bd25dabe7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:8af8ae710b7bb0d44edc885792516e5b3d3019d460fe9988723ecff6c6361291,PodSandboxId:91d588b9442b7a5883fc1e6ec70b3073b793fe9fa8e955c8f9ca0da9ba64c130,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726850780862773926,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85755d16-e8fa-4878-9184-45658ba8d8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c378811c5ad20a87fbb4de0cc32b2c86dc1e847531f104f45e8945f74db49ebf,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726850778878102572,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a52dd73b64e18e4b092d4faca5a851b5873993d6017437104456eb51f3e1465a,PodSandboxId:1c6c09297d2606b08525d2ccba830943316f2d00ad82e5c753cf47556db96a02,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774360560772,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4l9hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccfc252-ad9c-4b70-bb1c-d81a71214556,},Annotations:map[string]string{io.kubernetes
.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6aa4419694f45aa6bf3df3b53e48fb2c8f23061822c55c47d7081f7e546a623,PodSandboxId:57c48f1670b2a1a06e6ff7871e9504d83d44e44f6b4cc4c9e901990d02cd4cd3,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774210107694,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2hz6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d531a52
-cced-4b3d-adfd-5d62357591e8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kuberne
tes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.po
d.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09902a512e79f577d8d4f2a8784f5484d2134b53bde422c6322893204f62b00a,PodSandboxId:3843f8105dc892830a295ca6b48e8f9f1e0a84e15e2eab1bd63dabf67e0567e1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc2
9,State:CONTAINER_RUNNING,CreatedAt:1726850735299111720,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f722d5e-9dee-4b0e-8661-9c4181ea4f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver
,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f085ab55-fe88-4464-af73-edc0da3b0789 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.285192167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78573faf-182c-4676-8292-a95111294789 name=/runtime.v1.RuntimeService/Version
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.285299806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78573faf-182c-4676-8292-a95111294789 name=/runtime.v1.RuntimeService/Version
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.286200454Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44e79bb9-a2d0-40c6-87fe-b9f09dfd7b46 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.287397725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851419287328046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44e79bb9-a2d0-40c6-87fe-b9f09dfd7b46 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.288018220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.namespace: kube-system,},},}" file="otel-collector/interceptors.go:62" id=4f40ed99-aab0-4e0a-ae85-a62ea3efb94c name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.288092446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f40ed99-aab0-4e0a-ae85-a62ea3efb94c name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.288571936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3789e1deba3b5c9ce3ea828aadfae5635edc167c040fa930464707e91be53341,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726850801062921486,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2170a5568649b01c67765f29e8fdff73695d347ea33e872cffc2746fb679bb35,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726850799105793588,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20afd16f541ca333f7bd36f8da7782ea9a69ae24093ca113e872faea4de2b70,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726850796453830227,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[
string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a74c2b2a3ee355c5e919249c37a775e1de74552c52cbd119a7bcde2f5ef8ff6,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726850794957838931,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115
f2,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4141de5542403c5675e25ca0d8c438d502a45b49559475f46261d4f34feaa611,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726850784957523000,Labels:map[string]string{io.kubernetes.container.name: node-driver
-registrar,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0580426e8d27f7c106f5d251425428617a0b35941fbdbeb0cef1280abf386f6c,PodSandboxId:7a34dc197c7221a0f7968767406de9d9088af78de12ad00aa7c9e7602d006f7e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726850782617829339,Labels:map[string]string{io.ku
bernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc733e6-4135-418b-a554-490bd25dabe7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8af8ae710b7bb0d44edc885792516e5b3d3019d460fe9988723ecff6c6361291,PodSandboxId:91d588b9442b7a5883fc1e6ec70b3073b793fe9fa8e955c8f9ca0da9ba64c130,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726850780862773926,Labels
:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85755d16-e8fa-4878-9184-45658ba8d8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c378811c5ad20a87fbb4de0cc32b2c86dc1e847531f104f45e8945f74db49ebf,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585d
c6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726850778878102572,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a52dd73b64e18e4b092d4faca5a851b5873993d6017437104456eb51f3e1465a,PodSandboxId:1c6c09297d2606b08525d2ccba830943316f2d00ad82e5c753cf47556db96a02,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774360560772,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4l9hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccfc252-ad9c-4b70-bb1c-d81a71214556,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6aa4419694f45aa6bf3df3b53e48fb2c8f23061822c55c47d7081f7e546a623,PodSandboxId:57c48f1670b2a1a06e6ff7871e9504d83d44e44f6b4cc4c9e901990d02cd4cd3,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b
20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774210107694,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2hz6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d531a52-cced-4b3d-adfd-5d62357591e8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics
-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09902a512e79f577d8d4f2a8784f5484d2134b53bde422c6322893204f62b00a,PodSandboxId:3843f8105dc892830a295ca6b48e8f9f1e0a84e15e2ea
b1bd63dabf67e0567e1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726850735299111720,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f722d5e-9dee-4b0e-8661-9c4181ea4f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f40ed99-aab0-4e0a-ae85-a62ea3efb94c name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.298278601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f29bbe75-a164-4220-bf44-fda39f240527 name=/runtime.v1.RuntimeService/Version
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.298428020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f29bbe75-a164-4220-bf44-fda39f240527 name=/runtime.v1.RuntimeService/Version
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.302561148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5a21df8-a2ee-40fb-96de-d6df997f7100 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.303689460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851419303658626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5a21df8-a2ee-40fb-96de-d6df997f7100 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.304484468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e65c9dbf-df2c-4711-853e-7fef800c1a57 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.304563397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e65c9dbf-df2c-4711-853e-7fef800c1a57 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.305278429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7855191edf9dac6a02cb338d5c06ae79feb999af96c1205e987920277065d079,PodSandboxId:94183eceae2996500c16f3e0182cde507f6cd239b42d7e04c4be2ec3899c6a6f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851392690228669,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3fe17849-a80b-47ae-adf6-77c01273238d,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e1bcda36f90d21b7130116c5ea6b1229ea9a0700e45bbff41b308db6dbc33c,PodSandboxId:e98930c60622fe0f7bdc4bf6d6d08bc7526fc617d34122970d0d7182bd9138e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726851385595495335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a52579e-aa38-4262-8d40-663925dc3ec1,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595
ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5038dfb91d9e8dca86d31517d2006b94b6c908631d5f20394c86871e56d1d08,PodSandboxId:21621b7034dfd4946db396ab2ecb322c86b682626f5c0285738a87ba88bfbf23,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851379562446175,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5e840cda-1451-4279-88ae-f9ba29c00bec,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3789e1deba3b5c9ce3ea828aadfae5635edc167c040fa930464707e91be53341,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726850801062921486,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-
hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2170a5568649b01c67765f29e8fdff73695d347ea33e872cffc2746fb679bb35,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726850799105793588,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kube
rnetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20afd16f541ca333f7bd36f8da7782ea9a69ae24093ca113e872faea4de2b70,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726850796453830227,Labels:map[string]string{io.kubernetes.contain
er.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a74c2b2a3ee355c5e919249c37a775e1de74552c52cbd119a7bcde2f5ef8ff6,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726850794957838931,Labels:map[string]s
tring{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2,PodSandboxId:a115fb5bcdd70dd9eaddc86f186e4f6e55036b28dc0a72cf68edf7dae1530096,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38
d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726850793113655585,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-79mpt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f93f931b-28ea-417f-9956-b9dce76ebe38,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4141de5542403c5675e25ca0d8c438d502a45b49559475f
46261d4f34feaa611,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726850784957523000,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:0580426e8d27f7c106f5d251425428617a0b35941fbdbeb0cef1280abf386f6c,PodSandboxId:7a34dc197c7221a0f7968767406de9d9088af78de12ad00aa7c9e7602d006f7e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726850782617829339,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc733e6-4135-418b-a554-490bd25dabe7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:8af8ae710b7bb0d44edc885792516e5b3d3019d460fe9988723ecff6c6361291,PodSandboxId:91d588b9442b7a5883fc1e6ec70b3073b793fe9fa8e955c8f9ca0da9ba64c130,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726850780862773926,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85755d16-e8fa-4878-9184-45658ba8d8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c378811c5ad20a87fbb4de0cc32b2c86dc1e847531f104f45e8945f74db49ebf,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726850778878102572,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a52dd73b64e18e4b092d4faca5a851b5873993d6017437104456eb51f3e1465a,PodSandboxId:1c6c09297d2606b08525d2ccba830943316f2d00ad82e5c753cf47556db96a02,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774360560772,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4l9hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccfc252-ad9c-4b70-bb1c-d81a71214556,},Annotations:map[string]string{io.kubernetes
.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6aa4419694f45aa6bf3df3b53e48fb2c8f23061822c55c47d7081f7e546a623,PodSandboxId:57c48f1670b2a1a06e6ff7871e9504d83d44e44f6b4cc4c9e901990d02cd4cd3,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774210107694,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2hz6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d531a52
-cced-4b3d-adfd-5d62357591e8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kuberne
tes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.po
d.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09902a512e79f577d8d4f2a8784f5484d2134b53bde422c6322893204f62b00a,PodSandboxId:3843f8105dc892830a295ca6b48e8f9f1e0a84e15e2eab1bd63dabf67e0567e1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc2
9,State:CONTAINER_RUNNING,CreatedAt:1726850735299111720,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f722d5e-9dee-4b0e-8661-9c4181ea4f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver
,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e65c9dbf-df2c-4711-853e-7fef800c1a57 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.369240188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=661ba300-a9d1-4e7c-9b6f-ae49cd336a4c name=/runtime.v1.RuntimeService/Version
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.369407666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=661ba300-a9d1-4e7c-9b6f-ae49cd336a4c name=/runtime.v1.RuntimeService/Version
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.375101511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c84d685c-7246-430e-9d8d-bfe848e890cd name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.378135343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726851419378088365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c84d685c-7246-430e-9d8d-bfe848e890cd name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.379090381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76012d66-539a-43b6-9bd3-5f55aff72e04 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.379169370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76012d66-539a-43b6-9bd3-5f55aff72e04 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 16:56:59 addons-489802 crio[664]: time="2024-09-20 16:56:59.379870987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3b98df31c510e2b7d8467304c108770bfabad7ebb2494f12313d8f912b2482c,PodSandboxId:ddccd18e28f19bcd554a80347c0802f4ddf6d7bad08d4b2ac6f27eb3e102b20d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726851396918964948,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c34572d-1118-4bb3-8265-b67b3104bc59,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7855191edf9dac6a02cb338d5c06ae79feb999af96c1205e987920277065d079,PodSandboxId:94183eceae2996500c16f3e0182cde507f6cd239b42d7e04c4be2ec3899c6a6f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851392690228669,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3fe17849-a80b-47ae-adf6-77c01273238d,},Annotations:map[string]string{io.kubernetes.container.
hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e1bcda36f90d21b7130116c5ea6b1229ea9a0700e45bbff41b308db6dbc33c,PodSandboxId:e98930c60622fe0f7bdc4bf6d6d08bc7526fc617d34122970d0d7182bd9138e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6fd955f66c231c1a946653170d096a28ac2b2052a02080c0b84ec082a07f7d12,State:CONTAINER_EXITED,CreatedAt:1726851385595495335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a52579e-aa38-4262-8d40-663925dc3ec1,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595
ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5038dfb91d9e8dca86d31517d2006b94b6c908631d5f20394c86871e56d1d08,PodSandboxId:21621b7034dfd4946db396ab2ecb322c86b682626f5c0285738a87ba88bfbf23,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1726851379562446175,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 5e840cda-1451-4279-88ae-f9ba29c00bec,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce,PodSandboxId:66f4ad3477a6cc6a00655cc193d28db01097870bd3585db50c33f9e7cc96f8cf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726850860245098258,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wzvr2,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 9688d654-b2f1-4e67-b21f-737c57cb6d4f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3789e1deba3b5c9ce3ea828aadfae5635edc167c040fa930464707e91be53341,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726850801062921486,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-
hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2170a5568649b01c67765f29e8fdff73695d347ea33e872cffc2746fb679bb35,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726850799105793588,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kube
rnetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d20afd16f541ca333f7bd36f8da7782ea9a69ae24093ca113e872faea4de2b70,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726850796453830227,Labels:map[string]string{io.kubernetes.contain
er.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a74c2b2a3ee355c5e919249c37a775e1de74552c52cbd119a7bcde2f5ef8ff6,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726850794957838931,Labels:map[string]s
tring{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c24274c3f958be71bf70e73d568bc6a4bb1bb6c65a5881e3fc34fefcc9fcf2,PodSandboxId:a115fb5bcdd70dd9eaddc86f186e4f6e55036b28dc0a72cf68edf7dae1530096,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38
d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726850793113655585,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-79mpt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f93f931b-28ea-417f-9956-b9dce76ebe38,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4141de5542403c5675e25ca0d8c438d502a45b49559475f
46261d4f34feaa611,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726850784957523000,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:0580426e8d27f7c106f5d251425428617a0b35941fbdbeb0cef1280abf386f6c,PodSandboxId:7a34dc197c7221a0f7968767406de9d9088af78de12ad00aa7c9e7602d006f7e,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726850782617829339,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fc733e6-4135-418b-a554-490bd25dabe7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:8af8ae710b7bb0d44edc885792516e5b3d3019d460fe9988723ecff6c6361291,PodSandboxId:91d588b9442b7a5883fc1e6ec70b3073b793fe9fa8e955c8f9ca0da9ba64c130,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726850780862773926,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85755d16-e8fa-4878-9184-45658ba8d8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c378811c5ad20a87fbb4de0cc32b2c86dc1e847531f104f45e8945f74db49ebf,PodSandboxId:a00df88c1f82fc3492928da8501518bce8b0f2ccb5d6274f59769e673a724852,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726850778878102572,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-hglqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aeb8bcc-1f9f-40f6-8aa1-4822a64115f2,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e85742448a79b5c1857677ad2a134c6852e9495035cbf9c25e3a7521dd6bb2,PodSandboxId:61840f6d138dd81b5b65efdfcdb4db6fc37465b1ee033b0bee2142714f07f4ae,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850777385583505,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b6mtt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd711ef0-0010-45af-a950-49c84a55c942,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a9b75a453cd62d16bb90bc10f20dd616029cfd1dbb3300fdd9d3b272d5c1367,PodSandboxId:85dbe34d0b929d7356ea58dd7954b02069f214007674d69cc2313ed32dff2fc1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726850776662845564,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7lw7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52fba05c-46c5-4916-b5e4-386dadb0ae61,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a52dd73b64e18e4b092d4faca5a851b5873993d6017437104456eb51f3e1465a,PodSandboxId:1c6c09297d2606b08525d2ccba830943316f2d00ad82e5c753cf47556db96a02,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774360560772,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4l9hv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccfc252-ad9c-4b70-bb1c-d81a71214556,},Annotations:map[string]string{io.kubernetes
.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6aa4419694f45aa6bf3df3b53e48fb2c8f23061822c55c47d7081f7e546a623,PodSandboxId:57c48f1670b2a1a06e6ff7871e9504d83d44e44f6b4cc4c9e901990d02cd4cd3,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726850774210107694,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-2hz6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d531a52
-cced-4b3d-adfd-5d62357591e8,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0690e87ddb4f9357eefd739e00c9cec1ec022eda1379279b535ba4678c33b26,PodSandboxId:36aedadeb2582fe5df950ad8776d82e03104ab51a608a29ba00e9113b19e678e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1726850772239877059,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-rhmqb,io.kuberne
tes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e5f1d3f8-1767-4ad2-b5b8-eb5bf18bc163,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a0d036505e72ed6b62c09226aa4d219c30e6e162e73ebffc595f568b216931d,PodSandboxId:1ae7bada2f668e2292fc48d3426bfa34e41215d4336864f62a4f90b4ee95709f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726850737987301783,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.po
d.name: metrics-server-84c5f94fbc-txlrn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d2625e-ba6e-44e1-b245-0edc2adaa243,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09902a512e79f577d8d4f2a8784f5484d2134b53bde422c6322893204f62b00a,PodSandboxId:3843f8105dc892830a295ca6b48e8f9f1e0a84e15e2eab1bd63dabf67e0567e1,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc2
9,State:CONTAINER_RUNNING,CreatedAt:1726850735299111720,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f722d5e-9dee-4b0e-8661-9c4181ea4f9b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d,PodSandboxId:11b2a45f795d401fe4c78cf74478d3d702eff22fef4bdd814d8198ee5072d604,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726850713649115674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e04b7e0-a0fe-4e65-9ba5-63be2690da1d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06,PodSandboxId:cfda686abf7f1fef69de6a34f633f44ac3c87637d6ec92d05dc4a45a4d5652b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726850710125894103,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqbzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734f1782-975a-486b-adf3-32f60c376a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7,PodSandboxId:b53a284c395cfb6bdea6622b664327da6733b43d0375a7570cfa3dac443563e5,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726850707153952752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xr4bt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a20cb9e-3e82-4bda-9529-7e024f9681a4,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4,PodSandboxId:0ccdde3d3e8e30fec62b1f315de346cf5989b81e93276bfcf9792ae014efb9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726850695786767603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db84d1368c7024c014f2f2f0d973aae,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264,PodSandboxId:b3c515c903cd8c54cc3829530f8702fa82f07287a4bcae50433ffb0e6100c34b,Metadata:&ContainerMetadata{Name:kube-apiserver
,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726850695761844946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de814a9694fb61ae23ac46f9b9deb6e7,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23,PodSandboxId:ce3e5a61bc6e6a8044b701e61a79b033d814fb58851347acc4b4eaab63045047,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726850695741523021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016cfe34770e4cbd59f73407149e44ff,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e,PodSandboxId:c9a4930506bbb11794aa02ab9a68cfe8370b91453dd7ab2cce5eac61a155cacf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e
4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726850695699198150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50faea81a2001503e00d2a0be1ceba9e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76012d66-539a-43b6-9bd3-5f55aff72e04 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
b3b98df31c510 docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 22 seconds ago Running nginx 0 ddccd18e28f19 nginx
7855191edf9da a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824 26 seconds ago Exited helper-pod 0 94183eceae299 helper-pod-delete-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98
88e1bcda36f90 docker.io/library/busybox@sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f 33 seconds ago Exited busybox 0 e98930c60622f test-local-path
e5038dfb91d9e docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee 39 seconds ago Exited helper-pod 0 21621b7034dfd helper-pod-create-pvc-b8225ab7-cae8-4ab5-8ca1-5e74b7712f98
1c1fd10705c64 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b 9 minutes ago Running gcp-auth 0 66f4ad3477a6c gcp-auth-89d5ffd79-wzvr2
3789e1deba3b5 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 a00df88c1f82f csi-hostpathplugin-hglqr
2170a5568649b registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 10 minutes ago Running csi-provisioner 0 a00df88c1f82f csi-hostpathplugin-hglqr
d20afd16f541c registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 10 minutes ago Running liveness-probe 0 a00df88c1f82f csi-hostpathplugin-hglqr
2a74c2b2a3ee3 registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11 10 minutes ago Running hostpath 0 a00df88c1f82f csi-hostpathplugin-hglqr
29c24274c3f95 registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6 10 minutes ago Running controller 0 a115fb5bcdd70 ingress-nginx-controller-bc57996ff-79mpt
4141de5542403 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc 10 minutes ago Running node-driver-registrar 0 a00df88c1f82f csi-hostpathplugin-hglqr
0580426e8d27f registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0 10 minutes ago Running csi-attacher 0 7a34dc197c722 csi-hostpath-attacher-0
8af8ae710b7bb registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 10 minutes ago Running csi-resizer 0 91d588b9442b7 csi-hostpath-resizer-0
c378811c5ad20 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 10 minutes ago Running csi-external-health-monitor-controller 0 a00df88c1f82f csi-hostpathplugin-hglqr
a5e85742448a7 ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242 10 minutes ago Exited patch 1 61840f6d138dd ingress-nginx-admission-patch-b6mtt
5a9b75a453cd6 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012 10 minutes ago Exited create 0 85dbe34d0b929 ingress-nginx-admission-create-h7lw7
a52dd73b64e18 registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 10 minutes ago Running volume-snapshot-controller 0 1c6c09297d260 snapshot-controller-56fcc65765-4l9hv
c6aa4419694f4 registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 10 minutes ago Running volume-snapshot-controller 0 57c48f1670b2a snapshot-controller-56fcc65765-2hz6g
b0690e87ddb4f docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 10 minutes ago Running local-path-provisioner 0 36aedadeb2582 local-path-provisioner-86d989889c-rhmqb
3a0d036505e72 registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a 11 minutes ago Running metrics-server 0 1ae7bada2f668 metrics-server-84c5f94fbc-txlrn
09902a512e79f gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 11 minutes ago Running minikube-ingress-dns 0 3843f8105dc89 kube-ingress-dns-minikube
5a981c68e9271 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 11 minutes ago Running storage-provisioner 0 11b2a45f795d4 storage-provisioner
70c74f4f1e0bd c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 11 minutes ago Running coredns 0 cfda686abf7f1 coredns-7c65d6cfc9-nqbzq
7c60a90d5ed29 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561 11 minutes ago Running kube-proxy 0 b53a284c395cf kube-proxy-xr4bt
44c347dc4cb23 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1 12 minutes ago Running kube-controller-manager 0 0ccdde3d3e8e3 kube-controller-manager-addons-489802
79fb233450407 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee 12 minutes ago Running kube-apiserver 0 b3c515c903cd8 kube-apiserver-addons-489802
5ebda0675cfbe 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4 12 minutes ago Running etcd 0 ce3e5a61bc6e6 etcd-addons-489802
53631bbb5fc19 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b 12 minutes ago Running kube-scheduler 0 c9a4930506bbb kube-scheduler-addons-489802
==> coredns [70c74f4f1e0bde75fc553a034aa664a515c218d2b72725850921f92314b6ec06] <==
[INFO] 127.0.0.1:51784 - 8829 "HINFO IN 5160120906343044549.4812313304468353436. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012102619s
[INFO] 10.244.0.7:49904 - 44683 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000739291s
[INFO] 10.244.0.7:49904 - 13446 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000838879s
[INFO] 10.244.0.7:37182 - 17696 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137198s
[INFO] 10.244.0.7:37182 - 29725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120771s
[INFO] 10.244.0.7:40785 - 12767 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012186s
[INFO] 10.244.0.7:40785 - 24273 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223065s
[INFO] 10.244.0.7:54049 - 5032 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122634s
[INFO] 10.244.0.7:54049 - 51625 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000075286s
[INFO] 10.244.0.7:57416 - 8811 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080693s
[INFO] 10.244.0.7:57416 - 56406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038363s
[INFO] 10.244.0.7:59797 - 29819 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040968s
[INFO] 10.244.0.7:59797 - 16249 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000038791s
[INFO] 10.244.0.7:39368 - 3897 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045812s
[INFO] 10.244.0.7:39368 - 53818 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034439s
[INFO] 10.244.0.7:57499 - 43541 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049958s
[INFO] 10.244.0.7:57499 - 15379 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000036533s
[INFO] 10.244.0.21:51858 - 31367 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000847603s
[INFO] 10.244.0.21:33579 - 64948 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000139841s
[INFO] 10.244.0.21:48527 - 40604 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000280976s
[INFO] 10.244.0.21:52717 - 13930 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000169344s
[INFO] 10.244.0.21:58755 - 3796 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000147676s
[INFO] 10.244.0.21:51813 - 12818 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082135s
[INFO] 10.244.0.21:51795 - 17985 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004530788s
[INFO] 10.244.0.21:47998 - 23926 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002659458s
==> describe nodes <==
Name: addons-489802
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-489802
kubernetes.io/os=linux
minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
minikube.k8s.io/name=addons-489802
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_20T16_45_01_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-489802
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-489802"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 20 Sep 2024 16:44:58 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-489802
AcquireTime: <unset>
RenewTime: Fri, 20 Sep 2024 16:56:56 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 20 Sep 2024 16:56:43 +0000 Fri, 20 Sep 2024 16:44:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 20 Sep 2024 16:56:43 +0000 Fri, 20 Sep 2024 16:44:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 20 Sep 2024 16:56:43 +0000 Fri, 20 Sep 2024 16:44:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 20 Sep 2024 16:56:43 +0000 Fri, 20 Sep 2024 16:45:01 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.89
Hostname: addons-489802
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: fd813db21ac84502aef251a6893e0027
System UUID: fd813db2-1ac8-4502-aef2-51a6893e0027
Boot ID: ed0a3698-272d-483a-ba56-acac4def529a
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (19 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m16s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30s
gcp-auth gcp-auth-89d5ffd79-wzvr2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
ingress-nginx ingress-nginx-controller-bc57996ff-79mpt 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 11m
kube-system coredns-7c65d6cfc9-nqbzq 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-hglqr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-addons-489802 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 11m
kube-system kube-apiserver-addons-489802 250m (12%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-addons-489802 200m (10%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-xr4bt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-addons-489802 100m (5%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-txlrn 100m (5%) 0 (0%) 200Mi (5%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-2hz6g 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-4l9hv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-rhmqb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (12%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal Starting 11m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m (x2 over 11m) kubelet Node addons-489802 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m (x2 over 11m) kubelet Node addons-489802 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m (x2 over 11m) kubelet Node addons-489802 status is now: NodeHasSufficientPID
Normal NodeReady 11m kubelet Node addons-489802 status is now: NodeReady
Normal RegisteredNode 11m node-controller Node addons-489802 event: Registered Node addons-489802 in Controller
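(Note on the node snapshot above: allocated CPU requests are 950m of 2 CPUs and memory requests 460Mi of roughly 3.8Gi, so the node does not appear resource-starved at the time of the failure. If the profile is still available, the same view can be regenerated with a plain describe call — shown here only as an illustrative follow-up, not part of the recorded run:

    kubectl --context addons-489802 describe node addons-489802
)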
==> dmesg <==
[ +0.174126] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
[ +4.891583] kauditd_printk_skb: 98 callbacks suppressed
[ +5.138076] kauditd_printk_skb: 145 callbacks suppressed
[ +10.203071] kauditd_printk_skb: 70 callbacks suppressed
[ +17.983286] kauditd_printk_skb: 2 callbacks suppressed
[Sep20 16:46] kauditd_printk_skb: 4 callbacks suppressed
[ +13.042505] kauditd_printk_skb: 42 callbacks suppressed
[ +6.124032] kauditd_printk_skb: 45 callbacks suppressed
[ +10.494816] kauditd_printk_skb: 64 callbacks suppressed
[ +5.981422] kauditd_printk_skb: 11 callbacks suppressed
[ +5.234675] kauditd_printk_skb: 34 callbacks suppressed
[Sep20 16:47] kauditd_printk_skb: 28 callbacks suppressed
[ +7.543099] kauditd_printk_skb: 9 callbacks suppressed
[Sep20 16:48] kauditd_printk_skb: 2 callbacks suppressed
[Sep20 16:49] kauditd_printk_skb: 28 callbacks suppressed
[Sep20 16:52] kauditd_printk_skb: 28 callbacks suppressed
[Sep20 16:55] kauditd_printk_skb: 28 callbacks suppressed
[ +5.170883] kauditd_printk_skb: 6 callbacks suppressed
[ +5.280461] kauditd_printk_skb: 17 callbacks suppressed
[Sep20 16:56] kauditd_printk_skb: 24 callbacks suppressed
[ +10.067719] kauditd_printk_skb: 13 callbacks suppressed
[ +5.043461] kauditd_printk_skb: 21 callbacks suppressed
[ +5.256575] kauditd_printk_skb: 14 callbacks suppressed
[ +9.179843] kauditd_printk_skb: 27 callbacks suppressed
[ +15.573697] kauditd_printk_skb: 7 callbacks suppressed
==> etcd [5ebda0675cfbe9e7b3e6c1ca40351339db78cf3954608a12cc779850ee452a23] <==
{"level":"warn","ts":"2024-09-20T16:46:31.521799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.668395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T16:46:31.522054Z","caller":"traceutil/trace.go:171","msg":"trace[655563733] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"298.968755ms","start":"2024-09-20T16:46:31.223072Z","end":"2024-09-20T16:46:31.522041Z","steps":["trace[655563733] 'agreement among raft nodes before linearized reading' (duration: 298.302775ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T16:46:31.522572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.514745ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T16:46:31.522662Z","caller":"traceutil/trace.go:171","msg":"trace[397127513] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1072; }","duration":"285.60775ms","start":"2024-09-20T16:46:31.237046Z","end":"2024-09-20T16:46:31.522653Z","steps":["trace[397127513] 'agreement among raft nodes before linearized reading' (duration: 285.506056ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T16:46:31.523097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.094744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T16:46:31.521069Z","caller":"traceutil/trace.go:171","msg":"trace[1366548052] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"451.994343ms","start":"2024-09-20T16:46:31.069059Z","end":"2024-09-20T16:46:31.521053Z","steps":["trace[1366548052] 'process raft request' (duration: 450.539479ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T16:46:31.523185Z","caller":"traceutil/trace.go:171","msg":"trace[1958014936] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1072; }","duration":"389.189661ms","start":"2024-09-20T16:46:31.133988Z","end":"2024-09-20T16:46:31.523178Z","steps":["trace[1958014936] 'agreement among raft nodes before linearized reading' (duration: 388.742689ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T16:46:31.523315Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:46:31.133949Z","time spent":"389.346336ms","remote":"127.0.0.1:44644","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2024-09-20T16:46:31.523518Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:46:31.069043Z","time spent":"454.199637ms","remote":"127.0.0.1:44626","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1066 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"info","ts":"2024-09-20T16:46:34.697548Z","caller":"traceutil/trace.go:171","msg":"trace[1773063632] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"138.671352ms","start":"2024-09-20T16:46:34.558854Z","end":"2024-09-20T16:46:34.697526Z","steps":["trace[1773063632] 'process raft request' (duration: 138.455302ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T16:47:09.828412Z","caller":"traceutil/trace.go:171","msg":"trace[1350480991] linearizableReadLoop","detail":"{readStateIndex:1234; appliedIndex:1233; }","duration":"107.953401ms","start":"2024-09-20T16:47:09.720376Z","end":"2024-09-20T16:47:09.828329Z","steps":["trace[1350480991] 'read index received' (duration: 107.782449ms)","trace[1350480991] 'applied index is now lower than readState.Index' (duration: 170.357µs)"],"step_count":2}
{"level":"info","ts":"2024-09-20T16:47:09.828591Z","caller":"traceutil/trace.go:171","msg":"trace[1677279500] transaction","detail":"{read_only:false; response_revision:1192; number_of_response:1; }","duration":"108.710691ms","start":"2024-09-20T16:47:09.719867Z","end":"2024-09-20T16:47:09.828578Z","steps":["trace[1677279500] 'process raft request' (duration: 108.343763ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T16:47:09.828834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.468877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T16:47:09.828877Z","caller":"traceutil/trace.go:171","msg":"trace[823583891] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"108.573167ms","start":"2024-09-20T16:47:09.720295Z","end":"2024-09-20T16:47:09.828868Z","steps":["trace[823583891] 'agreement among raft nodes before linearized reading' (duration: 108.427543ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T16:54:56.686206Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1494}
{"level":"info","ts":"2024-09-20T16:54:56.732913Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1494,"took":"45.95642ms","hash":3143060453,"current-db-size-bytes":6316032,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3231744,"current-db-size-in-use":"3.2 MB"}
{"level":"info","ts":"2024-09-20T16:54:56.733061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3143060453,"revision":1494,"compact-revision":-1}
{"level":"info","ts":"2024-09-20T16:55:52.021318Z","caller":"traceutil/trace.go:171","msg":"trace[2100115174] transaction","detail":"{read_only:false; response_revision:2018; number_of_response:1; }","duration":"379.66185ms","start":"2024-09-20T16:55:51.641590Z","end":"2024-09-20T16:55:52.021252Z","steps":["trace[2100115174] 'process raft request' (duration: 379.545504ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T16:55:52.021786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T16:55:51.641574Z","time spent":"380.006071ms","remote":"127.0.0.1:44742","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1986 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
{"level":"info","ts":"2024-09-20T16:55:52.022293Z","caller":"traceutil/trace.go:171","msg":"trace[35214985] linearizableReadLoop","detail":"{readStateIndex:2175; appliedIndex:2174; }","duration":"196.804789ms","start":"2024-09-20T16:55:51.825473Z","end":"2024-09-20T16:55:52.022278Z","steps":["trace[35214985] 'read index received' (duration: 196.433504ms)","trace[35214985] 'applied index is now lower than readState.Index' (duration: 370.887µs)"],"step_count":2}
{"level":"info","ts":"2024-09-20T16:55:52.022475Z","caller":"traceutil/trace.go:171","msg":"trace[1790896376] transaction","detail":"{read_only:false; response_revision:2019; number_of_response:1; }","duration":"211.987025ms","start":"2024-09-20T16:55:51.810476Z","end":"2024-09-20T16:55:52.022463Z","steps":["trace[1790896376] 'process raft request' (duration: 211.729812ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T16:55:52.022604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.118957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T16:55:52.022641Z","caller":"traceutil/trace.go:171","msg":"trace[1794876456] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2019; }","duration":"197.165972ms","start":"2024-09-20T16:55:51.825467Z","end":"2024-09-20T16:55:52.022633Z","steps":["trace[1794876456] 'agreement among raft nodes before linearized reading' (duration: 197.096047ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T16:56:32.273552Z","caller":"traceutil/trace.go:171","msg":"trace[1806753974] transaction","detail":"{read_only:false; response_revision:2278; number_of_response:1; }","duration":"138.283014ms","start":"2024-09-20T16:56:32.135255Z","end":"2024-09-20T16:56:32.273538Z","steps":["trace[1806753974] 'process raft request' (duration: 137.851209ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T16:56:36.295953Z","caller":"traceutil/trace.go:171","msg":"trace[1488171244] transaction","detail":"{read_only:false; response_revision:2301; number_of_response:1; }","duration":"162.589325ms","start":"2024-09-20T16:56:36.131622Z","end":"2024-09-20T16:56:36.294211Z","steps":["trace[1488171244] 'process raft request' (duration: 162.248073ms)"],"step_count":1}
==> gcp-auth [1c1fd10705c644580fdef2fc3075d7c9349c8e4b44d3899910dd41e40c87e2ce] <==
2024/09/20 16:47:40 Ready to write response ...
2024/09/20 16:47:43 Ready to marshal response ...
2024/09/20 16:47:43 Ready to write response ...
2024/09/20 16:47:43 Ready to marshal response ...
2024/09/20 16:47:43 Ready to write response ...
2024/09/20 16:55:47 Ready to marshal response ...
2024/09/20 16:55:47 Ready to write response ...
2024/09/20 16:55:47 Ready to marshal response ...
2024/09/20 16:55:47 Ready to write response ...
2024/09/20 16:55:47 Ready to marshal response ...
2024/09/20 16:55:47 Ready to write response ...
2024/09/20 16:55:57 Ready to marshal response ...
2024/09/20 16:55:57 Ready to write response ...
2024/09/20 16:56:16 Ready to marshal response ...
2024/09/20 16:56:16 Ready to write response ...
2024/09/20 16:56:16 Ready to marshal response ...
2024/09/20 16:56:16 Ready to write response ...
2024/09/20 16:56:23 Ready to marshal response ...
2024/09/20 16:56:23 Ready to write response ...
2024/09/20 16:56:28 Ready to marshal response ...
2024/09/20 16:56:28 Ready to write response ...
2024/09/20 16:56:29 Ready to marshal response ...
2024/09/20 16:56:29 Ready to write response ...
2024/09/20 16:56:50 Ready to marshal response ...
2024/09/20 16:56:50 Ready to write response ...
==> kernel <==
16:56:59 up 12 min, 0 users, load average: 0.84, 0.55, 0.42
Linux addons-489802 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [79fb233450407c6fdf879eb55124a5840bf49aaa572c10f7add06512d38df264] <==
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0920 16:46:13.039223 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0920 16:46:13.040532 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0920 16:46:42.591244 1 handler_proxy.go:99] no RequestInfo found in the context
E0920 16:46:42.591968 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.82.249:443: connect: connection refused" logger="UnhandledError"
E0920 16:46:42.592145 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E0920 16:46:42.594035 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.82.249:443: connect: connection refused" logger="UnhandledError"
E0920 16:46:42.599675 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.82.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.82.249:443: connect: connection refused" logger="UnhandledError"
I0920 16:46:42.683134 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E0920 16:47:19.399216 1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E0920 16:47:19.400809 1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0920 16:47:19.402902 1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0920 16:47:19.404104 1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E0920 16:47:19.412494 1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="13.458229ms" method="GET" path="/apis/apps/v1/namespaces/yakd-dashboard/replicasets/yakd-dashboard-67d98fc6b" result=null
I0920 16:55:47.034722 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.200.88"}
I0920 16:56:11.192249 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0920 16:56:12.228711 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0920 16:56:29.568621 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0920 16:56:29.873321 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.88.195"}
I0920 16:56:40.306913 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [44c347dc4cb2326d9ce7eef959abf86dcaee69ecf824e59fbe44600500e8a0f4] <==
I0920 16:55:52.978413 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="125.342µs"
I0920 16:55:58.670799 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="4.445µs"
I0920 16:55:59.411189 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="4.296µs"
I0920 16:56:08.808071 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
I0920 16:56:09.550858 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
E0920 16:56:12.230847 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 16:56:12.972141 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-489802"
W0920 16:56:13.260946 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:56:13.261098 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:56:16.343580 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:56:16.343635 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:56:20.746066 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:56:20.746185 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 16:56:21.328553 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
W0920 16:56:32.273073 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:56:32.273146 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 16:56:35.266525 1 shared_informer.go:313] Waiting for caches to sync for resource quota
I0920 16:56:35.266621 1 shared_informer.go:320] Caches are synced for resource quota
I0920 16:56:35.842831 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0920 16:56:35.842926 1 shared_informer.go:320] Caches are synced for garbage collector
I0920 16:56:43.368488 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-489802"
W0920 16:56:54.448783 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:56:54.448846 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 16:56:57.871425 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.548µs"
I0920 16:57:00.043111 1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
==> kube-proxy [7c60a90d5ed294c1a5015ea6f6b5c5259e8d437a6b5dd0f9dd758bb62d91c7b7] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0920 16:45:07.927443 1 proxier.go:734] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0920 16:45:07.961049 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.89"]
E0920 16:45:07.961134 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0920 16:45:08.130722 1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
I0920 16:45:08.130762 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0920 16:45:08.130790 1 server_linux.go:169] "Using iptables Proxier"
I0920 16:45:08.135726 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0920 16:45:08.136036 1 server.go:483] "Version info" version="v1.31.1"
I0920 16:45:08.136059 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0920 16:45:08.137263 1 config.go:199] "Starting service config controller"
I0920 16:45:08.137318 1 shared_informer.go:313] Waiting for caches to sync for service config
I0920 16:45:08.137400 1 config.go:105] "Starting endpoint slice config controller"
I0920 16:45:08.137405 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0920 16:45:08.137933 1 config.go:328] "Starting node config controller"
I0920 16:45:08.137953 1 shared_informer.go:313] Waiting for caches to sync for node config
I0920 16:45:08.237708 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0920 16:45:08.237750 1 shared_informer.go:320] Caches are synced for service config
I0920 16:45:08.239006 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [53631bbb5fc199153283953dffaf83c3d2a2b4cdbda98ab81770b42af5dfe30e] <==
W0920 16:44:58.228924 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:58.230396 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:58.228968 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:58.230429 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.045447 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0920 16:44:59.045496 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.126233 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:59.126435 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.147240 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0920 16:44:59.147292 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.277135 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:59.278460 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.296223 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0920 16:44:59.296273 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0920 16:44:59.348771 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0920 16:44:59.348828 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.368238 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0920 16:44:59.368290 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.411207 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0920 16:44:59.411256 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.475030 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:59.475087 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:59.605643 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0920 16:44:59.605806 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0920 16:45:02.104787 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.556538 1210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdfr8\" (UniqueName: \"kubernetes.io/projected/a467b141-5827-4440-b11f-9203739b4a10-kube-api-access-wdfr8\") pod \"a467b141-5827-4440-b11f-9203739b4a10\" (UID: \"a467b141-5827-4440-b11f-9203739b4a10\") "
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.556656 1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-792t6\" (UniqueName: \"kubernetes.io/projected/5f951af3-0fc4-4606-9f2e-556adaa494f1-kube-api-access-792t6\") on node \"addons-489802\" DevicePath \"\""
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.556683 1210 reconciler_common.go:281] "operationExecutor.UnmountDevice started for volume \"pvc-67f5ab3b-e6d7-4c60-8631-6b4e746602db\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^54ac60df-7771-11ef-b51a-7ae5a69c722f\") on node \"addons-489802\" "
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.556693 1210 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5f951af3-0fc4-4606-9f2e-556adaa494f1-gcp-creds\") on node \"addons-489802\" DevicePath \"\""
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.561300 1210 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-67f5ab3b-e6d7-4c60-8631-6b4e746602db" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^54ac60df-7771-11ef-b51a-7ae5a69c722f") on node "addons-489802"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.561928 1210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a467b141-5827-4440-b11f-9203739b4a10-kube-api-access-wdfr8" (OuterVolumeSpecName: "kube-api-access-wdfr8") pod "a467b141-5827-4440-b11f-9203739b4a10" (UID: "a467b141-5827-4440-b11f-9203739b4a10"). InnerVolumeSpecName "kube-api-access-wdfr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.575555 1210 scope.go:117] "RemoveContainer" containerID="f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.632689 1210 scope.go:117] "RemoveContainer" containerID="f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9"
Sep 20 16:56:58 addons-489802 kubelet[1210]: E0920 16:56:58.633438 1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9\": container with ID starting with f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9 not found: ID does not exist" containerID="f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.633470 1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9"} err="failed to get container status \"f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9\": rpc error: code = NotFound desc = could not find container \"f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9\": container with ID starting with f231def9a616ea3d01057454c6d7cb6dd46e76a66f252dde0e7010d593debfb9 not found: ID does not exist"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.633493 1210 scope.go:117] "RemoveContainer" containerID="feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.660255 1210 reconciler_common.go:288] "Volume detached for volume \"pvc-67f5ab3b-e6d7-4c60-8631-6b4e746602db\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^54ac60df-7771-11ef-b51a-7ae5a69c722f\") on node \"addons-489802\" DevicePath \"\""
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.660320 1210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wdfr8\" (UniqueName: \"kubernetes.io/projected/a467b141-5827-4440-b11f-9203739b4a10-kube-api-access-wdfr8\") on node \"addons-489802\" DevicePath \"\""
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.669016 1210 scope.go:117] "RemoveContainer" containerID="feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c"
Sep 20 16:56:58 addons-489802 kubelet[1210]: E0920 16:56:58.669701 1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c\": container with ID starting with feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c not found: ID does not exist" containerID="feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.669745 1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c"} err="failed to get container status \"feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c\": rpc error: code = NotFound desc = could not find container \"feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c\": container with ID starting with feb881ba69c1a13e19d50fc82b036cba32648069f91f79b6395af21eecc1840c not found: ID does not exist"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.669771 1210 scope.go:117] "RemoveContainer" containerID="9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.705007 1210 scope.go:117] "RemoveContainer" containerID="9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355"
Sep 20 16:56:58 addons-489802 kubelet[1210]: E0920 16:56:58.705684 1210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355\": container with ID starting with 9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355 not found: ID does not exist" containerID="9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.705756 1210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355"} err="failed to get container status \"9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355\": rpc error: code = NotFound desc = could not find container \"9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355\": container with ID starting with 9151dfde6abf4371e401cb370de3d1093860959a7db56ed4562c36fda613a355 not found: ID does not exist"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.887046 1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4" path="/var/lib/kubelet/pods/1e3cfba8-c77f-46f3-b6b1-46c7a36ae3a4/volumes"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.887814 1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f951af3-0fc4-4606-9f2e-556adaa494f1" path="/var/lib/kubelet/pods/5f951af3-0fc4-4606-9f2e-556adaa494f1/volumes"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.888567 1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a467b141-5827-4440-b11f-9203739b4a10" path="/var/lib/kubelet/pods/a467b141-5827-4440-b11f-9203739b4a10/volumes"
Sep 20 16:56:58 addons-489802 kubelet[1210]: I0920 16:56:58.889706 1210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5fb0889-ead9-403c-b47a-ce4e44c73c83" path="/var/lib/kubelet/pods/c5fb0889-ead9-403c-b47a-ce4e44c73c83/volumes"
Sep 20 16:57:00 addons-489802 kubelet[1210]: I0920 16:57:00.388190 1210 csi_plugin.go:191] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin hostpath.csi.k8s.io
==> storage-provisioner [5a981c68e927108571692d174ebc0cf47e600882543d6dd401c23cbcd805d49d] <==
I0920 16:45:14.933598 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0920 16:45:15.129203 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0920 16:45:15.129288 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0920 16:45:15.469563 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0920 16:45:15.471781 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499!
I0920 16:45:15.471465 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47834956-e67b-4561-9f20-a2c3f45edc3a", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499 became leader
I0920 16:45:15.594691 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-489802_9f119035-fb3e-4caa-852b-e718c04f6499!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-489802 -n addons-489802
helpers_test.go:261: (dbg) Run: kubectl --context addons-489802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-h7lw7 ingress-nginx-admission-patch-b6mtt csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-hglqr
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-489802 describe pod busybox ingress-nginx-admission-create-h7lw7 ingress-nginx-admission-patch-b6mtt csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-hglqr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-489802 describe pod busybox ingress-nginx-admission-create-h7lw7 ingress-nginx-admission-patch-b6mtt csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-hglqr: exit status 1 (82.2534ms)
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-489802/192.168.39.89
Start Time: Fri, 20 Sep 2024 16:47:43 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.22
IPs:
IP: 10.244.0.22
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh4vn (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-lh4vn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m18s default-scheduler Successfully assigned default/busybox to addons-489802
Normal Pulling 7m45s (x4 over 9m18s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m44s (x4 over 9m18s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning Failed 7m44s (x4 over 9m18s) kubelet Error: ErrImagePull
Warning Failed 7m30s (x6 over 9m17s) kubelet Error: ImagePullBackOff
Normal BackOff 4m12s (x20 over 9m17s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-h7lw7" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-b6mtt" not found
Error from server (NotFound): pods "csi-hostpath-attacher-0" not found
Error from server (NotFound): pods "csi-hostpath-resizer-0" not found
Error from server (NotFound): pods "csi-hostpathplugin-hglqr" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-489802 describe pod busybox ingress-nginx-admission-create-h7lw7 ingress-nginx-admission-patch-b6mtt csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-hglqr: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.32s)
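(Note on the post-mortem describe above: of the six pod names queried, only busybox still exists, and it is stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc failed with an auth-token error; the remaining five names return NotFound because those pods had already been removed by the time the post-mortem ran. Two hedged follow-up commands for digging into the pull failure on a live cluster, illustrative only and not part of the recorded run:

    kubectl --context addons-489802 get events -n default --field-selector involvedObject.name=busybox
    kubectl --context addons-489802 describe pod busybox -n default
)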