=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.247656ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.02051547s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004235551s
addons_test.go:338: (dbg) Run: kubectl --context addons-446299 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context addons-446299 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-446299 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080240833s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-446299 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p addons-446299 ip
2024/09/20 18:24:10 [DEBUG] GET http://192.168.39.237:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p addons-446299 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-446299 -n addons-446299
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-446299 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-446299 logs -n 25: (1.431002418s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | |
| | -p download-only-675466 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=crio | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
| delete | -p download-only-675466 | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
| start | -o=json --download-only | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | |
| | -p download-only-363869 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=crio | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
| delete | -p download-only-363869 | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
| delete | -p download-only-675466 | download-only-675466 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
| delete | -p download-only-363869 | download-only-363869 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
| start | --download-only -p | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | |
| | binary-mirror-747965 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:39359 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-747965 | binary-mirror-747965 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:12 UTC |
| addons | enable dashboard -p | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | |
| | addons-446299 | | | | | |
| addons | disable dashboard -p | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | |
| | addons-446299 | | | | | |
| start | -p addons-446299 --wait=true | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:12 UTC | 20 Sep 24 18:14 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| addons | enable headlamp | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:22 UTC | 20 Sep 24 18:22 UTC |
| | -p addons-446299 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable nvidia-device-plugin | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
| | -p addons-446299 | | | | | |
| addons | addons-446299 addons disable | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-446299 addons disable | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | disable cloud-spanner -p | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
| | addons-446299 | | | | | |
| ssh | addons-446299 ssh cat | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
| | /opt/local-path-provisioner/pvc-11168afa-d97c-4581-90a8-f19b354e2c35_default_test-pvc/file1 | | | | | |
| addons | addons-446299 addons disable | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable inspektor-gadget -p | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:23 UTC | 20 Sep 24 18:23 UTC |
| | addons-446299 | | | | | |
| ip | addons-446299 ip | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
| addons | addons-446299 addons disable | addons-446299 | jenkins | v1.34.0 | 20 Sep 24 18:24 UTC | 20 Sep 24 18:24 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/20 18:12:45
Running on machine: ubuntu-20-agent-15
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 18:12:45.452837 749135 out.go:345] Setting OutFile to fd 1 ...
I0920 18:12:45.452957 749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:12:45.452966 749135 out.go:358] Setting ErrFile to fd 2...
I0920 18:12:45.452970 749135 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:12:45.453156 749135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-739831/.minikube/bin
I0920 18:12:45.453777 749135 out.go:352] Setting JSON to false
I0920 18:12:45.454793 749135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6915,"bootTime":1726849050,"procs":270,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0920 18:12:45.454907 749135 start.go:139] virtualization: kvm guest
I0920 18:12:45.457071 749135 out.go:177] * [addons-446299] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I0920 18:12:45.458344 749135 out.go:177] - MINIKUBE_LOCATION=19678
I0920 18:12:45.458335 749135 notify.go:220] Checking for updates...
I0920 18:12:45.459761 749135 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 18:12:45.461106 749135 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19678-739831/kubeconfig
I0920 18:12:45.462449 749135 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-739831/.minikube
I0920 18:12:45.463737 749135 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0920 18:12:45.465084 749135 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0920 18:12:45.466379 749135 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 18:12:45.497434 749135 out.go:177] * Using the kvm2 driver based on user configuration
I0920 18:12:45.498519 749135 start.go:297] selected driver: kvm2
I0920 18:12:45.498542 749135 start.go:901] validating driver "kvm2" against <nil>
I0920 18:12:45.498561 749135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 18:12:45.499322 749135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 18:12:45.499411 749135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-739831/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 18:12:45.513921 749135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
I0920 18:12:45.513966 749135 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0920 18:12:45.514272 749135 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 18:12:45.514314 749135 cni.go:84] Creating CNI manager for ""
I0920 18:12:45.514372 749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0920 18:12:45.514386 749135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0920 18:12:45.514458 749135 start.go:340] cluster config:
{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 18:12:45.514600 749135 iso.go:125] acquiring lock: {Name:mk7c8e0c52ea50ffb7ac28fb9347d4f667085c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 18:12:45.516315 749135 out.go:177] * Starting "addons-446299" primary control-plane node in "addons-446299" cluster
I0920 18:12:45.517423 749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 18:12:45.517447 749135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
I0920 18:12:45.517459 749135 cache.go:56] Caching tarball of preloaded images
I0920 18:12:45.517543 749135 preload.go:172] Found /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0920 18:12:45.517552 749135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
I0920 18:12:45.517857 749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
I0920 18:12:45.517880 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json: {Name:mkaa7e3a2b8a2d95cecdc721e4fd7f5310773e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:12:45.518032 749135 start.go:360] acquireMachinesLock for addons-446299: {Name:mke27b943eaf3105a3a7818ba8cbb5bd07aa92e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 18:12:45.518095 749135 start.go:364] duration metric: took 46.763µs to acquireMachinesLock for "addons-446299"
I0920 18:12:45.518131 749135 start.go:93] Provisioning new machine with config: &{Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I0920 18:12:45.518195 749135 start.go:125] createHost starting for "" (driver="kvm2")
I0920 18:12:45.520537 749135 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0920 18:12:45.520688 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:12:45.520727 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:12:45.535639 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
I0920 18:12:45.536170 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:12:45.536786 749135 main.go:141] libmachine: Using API Version 1
I0920 18:12:45.536808 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:12:45.537162 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:12:45.537383 749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
I0920 18:12:45.537540 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:12:45.537694 749135 start.go:159] libmachine.API.Create for "addons-446299" (driver="kvm2")
I0920 18:12:45.537726 749135 client.go:168] LocalClient.Create starting
I0920 18:12:45.537791 749135 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem
I0920 18:12:45.635672 749135 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem
I0920 18:12:45.854167 749135 main.go:141] libmachine: Running pre-create checks...
I0920 18:12:45.854195 749135 main.go:141] libmachine: (addons-446299) Calling .PreCreateCheck
I0920 18:12:45.854768 749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
I0920 18:12:45.855238 749135 main.go:141] libmachine: Creating machine...
I0920 18:12:45.855256 749135 main.go:141] libmachine: (addons-446299) Calling .Create
I0920 18:12:45.855444 749135 main.go:141] libmachine: (addons-446299) Creating KVM machine...
I0920 18:12:45.856800 749135 main.go:141] libmachine: (addons-446299) DBG | found existing default KVM network
I0920 18:12:45.857584 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.857437 749157 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
I0920 18:12:45.857661 749135 main.go:141] libmachine: (addons-446299) DBG | created network xml:
I0920 18:12:45.857685 749135 main.go:141] libmachine: (addons-446299) DBG | <network>
I0920 18:12:45.857700 749135 main.go:141] libmachine: (addons-446299) DBG | <name>mk-addons-446299</name>
I0920 18:12:45.857710 749135 main.go:141] libmachine: (addons-446299) DBG | <dns enable='no'/>
I0920 18:12:45.857722 749135 main.go:141] libmachine: (addons-446299) DBG |
I0920 18:12:45.857736 749135 main.go:141] libmachine: (addons-446299) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0920 18:12:45.857749 749135 main.go:141] libmachine: (addons-446299) DBG | <dhcp>
I0920 18:12:45.857762 749135 main.go:141] libmachine: (addons-446299) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0920 18:12:45.857774 749135 main.go:141] libmachine: (addons-446299) DBG | </dhcp>
I0920 18:12:45.857784 749135 main.go:141] libmachine: (addons-446299) DBG | </ip>
I0920 18:12:45.857795 749135 main.go:141] libmachine: (addons-446299) DBG |
I0920 18:12:45.857805 749135 main.go:141] libmachine: (addons-446299) DBG | </network>
I0920 18:12:45.857817 749135 main.go:141] libmachine: (addons-446299) DBG |
I0920 18:12:45.862810 749135 main.go:141] libmachine: (addons-446299) DBG | trying to create private KVM network mk-addons-446299 192.168.39.0/24...
I0920 18:12:45.928127 749135 main.go:141] libmachine: (addons-446299) DBG | private KVM network mk-addons-446299 192.168.39.0/24 created
I0920 18:12:45.928216 749135 main.go:141] libmachine: (addons-446299) Setting up store path in /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
I0920 18:12:45.928243 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:45.928106 749157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-739831/.minikube
I0920 18:12:45.928255 749135 main.go:141] libmachine: (addons-446299) Building disk image from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
I0920 18:12:45.928282 749135 main.go:141] libmachine: (addons-446299) Downloading /home/jenkins/minikube-integration/19678-739831/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-739831/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
I0920 18:12:46.198371 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.198204 749157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa...
I0920 18:12:46.306630 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306482 749157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk...
I0920 18:12:46.306662 749135 main.go:141] libmachine: (addons-446299) DBG | Writing magic tar header
I0920 18:12:46.306673 749135 main.go:141] libmachine: (addons-446299) DBG | Writing SSH key tar header
I0920 18:12:46.306681 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:46.306605 749157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 ...
I0920 18:12:46.306695 749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299
I0920 18:12:46.306758 749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299 (perms=drwx------)
I0920 18:12:46.306798 749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube/machines (perms=drwxr-xr-x)
I0920 18:12:46.306816 749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831/.minikube (perms=drwxr-xr-x)
I0920 18:12:46.306825 749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube/machines
I0920 18:12:46.306839 749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831/.minikube
I0920 18:12:46.306872 749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-739831
I0920 18:12:46.306884 749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration/19678-739831 (perms=drwxrwxr-x)
I0920 18:12:46.306904 749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0920 18:12:46.306929 749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0920 18:12:46.306939 749135 main.go:141] libmachine: (addons-446299) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0920 18:12:46.306952 749135 main.go:141] libmachine: (addons-446299) Creating domain...
I0920 18:12:46.306963 749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home/jenkins
I0920 18:12:46.306969 749135 main.go:141] libmachine: (addons-446299) DBG | Checking permissions on dir: /home
I0920 18:12:46.306976 749135 main.go:141] libmachine: (addons-446299) DBG | Skipping /home - not owner
I0920 18:12:46.308063 749135 main.go:141] libmachine: (addons-446299) define libvirt domain using xml:
I0920 18:12:46.308090 749135 main.go:141] libmachine: (addons-446299) <domain type='kvm'>
I0920 18:12:46.308100 749135 main.go:141] libmachine: (addons-446299) <name>addons-446299</name>
I0920 18:12:46.308107 749135 main.go:141] libmachine: (addons-446299) <memory unit='MiB'>4000</memory>
I0920 18:12:46.308114 749135 main.go:141] libmachine: (addons-446299) <vcpu>2</vcpu>
I0920 18:12:46.308128 749135 main.go:141] libmachine: (addons-446299) <features>
I0920 18:12:46.308136 749135 main.go:141] libmachine: (addons-446299) <acpi/>
I0920 18:12:46.308144 749135 main.go:141] libmachine: (addons-446299) <apic/>
I0920 18:12:46.308150 749135 main.go:141] libmachine: (addons-446299) <pae/>
I0920 18:12:46.308156 749135 main.go:141] libmachine: (addons-446299)
I0920 18:12:46.308161 749135 main.go:141] libmachine: (addons-446299) </features>
I0920 18:12:46.308167 749135 main.go:141] libmachine: (addons-446299) <cpu mode='host-passthrough'>
I0920 18:12:46.308172 749135 main.go:141] libmachine: (addons-446299)
I0920 18:12:46.308184 749135 main.go:141] libmachine: (addons-446299) </cpu>
I0920 18:12:46.308194 749135 main.go:141] libmachine: (addons-446299) <os>
I0920 18:12:46.308203 749135 main.go:141] libmachine: (addons-446299) <type>hvm</type>
I0920 18:12:46.308221 749135 main.go:141] libmachine: (addons-446299) <boot dev='cdrom'/>
I0920 18:12:46.308234 749135 main.go:141] libmachine: (addons-446299) <boot dev='hd'/>
I0920 18:12:46.308243 749135 main.go:141] libmachine: (addons-446299) <bootmenu enable='no'/>
I0920 18:12:46.308250 749135 main.go:141] libmachine: (addons-446299) </os>
I0920 18:12:46.308255 749135 main.go:141] libmachine: (addons-446299) <devices>
I0920 18:12:46.308262 749135 main.go:141] libmachine: (addons-446299) <disk type='file' device='cdrom'>
I0920 18:12:46.308277 749135 main.go:141] libmachine: (addons-446299) <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/boot2docker.iso'/>
I0920 18:12:46.308290 749135 main.go:141] libmachine: (addons-446299) <target dev='hdc' bus='scsi'/>
I0920 18:12:46.308302 749135 main.go:141] libmachine: (addons-446299) <readonly/>
I0920 18:12:46.308312 749135 main.go:141] libmachine: (addons-446299) </disk>
I0920 18:12:46.308324 749135 main.go:141] libmachine: (addons-446299) <disk type='file' device='disk'>
I0920 18:12:46.308335 749135 main.go:141] libmachine: (addons-446299) <driver name='qemu' type='raw' cache='default' io='threads' />
I0920 18:12:46.308350 749135 main.go:141] libmachine: (addons-446299) <source file='/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/addons-446299.rawdisk'/>
I0920 18:12:46.308364 749135 main.go:141] libmachine: (addons-446299) <target dev='hda' bus='virtio'/>
I0920 18:12:46.308376 749135 main.go:141] libmachine: (addons-446299) </disk>
I0920 18:12:46.308386 749135 main.go:141] libmachine: (addons-446299) <interface type='network'>
I0920 18:12:46.308395 749135 main.go:141] libmachine: (addons-446299) <source network='mk-addons-446299'/>
I0920 18:12:46.308404 749135 main.go:141] libmachine: (addons-446299) <model type='virtio'/>
I0920 18:12:46.308414 749135 main.go:141] libmachine: (addons-446299) </interface>
I0920 18:12:46.308424 749135 main.go:141] libmachine: (addons-446299) <interface type='network'>
I0920 18:12:46.308440 749135 main.go:141] libmachine: (addons-446299) <source network='default'/>
I0920 18:12:46.308454 749135 main.go:141] libmachine: (addons-446299) <model type='virtio'/>
I0920 18:12:46.308462 749135 main.go:141] libmachine: (addons-446299) </interface>
I0920 18:12:46.308467 749135 main.go:141] libmachine: (addons-446299) <serial type='pty'>
I0920 18:12:46.308472 749135 main.go:141] libmachine: (addons-446299) <target port='0'/>
I0920 18:12:46.308478 749135 main.go:141] libmachine: (addons-446299) </serial>
I0920 18:12:46.308486 749135 main.go:141] libmachine: (addons-446299) <console type='pty'>
I0920 18:12:46.308493 749135 main.go:141] libmachine: (addons-446299) <target type='serial' port='0'/>
I0920 18:12:46.308498 749135 main.go:141] libmachine: (addons-446299) </console>
I0920 18:12:46.308504 749135 main.go:141] libmachine: (addons-446299) <rng model='virtio'>
I0920 18:12:46.308512 749135 main.go:141] libmachine: (addons-446299) <backend model='random'>/dev/random</backend>
I0920 18:12:46.308518 749135 main.go:141] libmachine: (addons-446299) </rng>
I0920 18:12:46.308522 749135 main.go:141] libmachine: (addons-446299)
I0920 18:12:46.308528 749135 main.go:141] libmachine: (addons-446299)
I0920 18:12:46.308544 749135 main.go:141] libmachine: (addons-446299) </devices>
I0920 18:12:46.308556 749135 main.go:141] libmachine: (addons-446299) </domain>
I0920 18:12:46.308574 749135 main.go:141] libmachine: (addons-446299)
I0920 18:12:46.314191 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:13:6e:16 in network default
I0920 18:12:46.314696 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:46.314712 749135 main.go:141] libmachine: (addons-446299) Ensuring networks are active...
I0920 18:12:46.315254 749135 main.go:141] libmachine: (addons-446299) Ensuring network default is active
I0920 18:12:46.315494 749135 main.go:141] libmachine: (addons-446299) Ensuring network mk-addons-446299 is active
I0920 18:12:46.315890 749135 main.go:141] libmachine: (addons-446299) Getting domain xml...
I0920 18:12:46.316428 749135 main.go:141] libmachine: (addons-446299) Creating domain...
I0920 18:12:47.702575 749135 main.go:141] libmachine: (addons-446299) Waiting to get IP...
I0920 18:12:47.703586 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:47.704120 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:47.704148 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.704086 749157 retry.go:31] will retry after 271.659022ms: waiting for machine to come up
I0920 18:12:47.977759 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:47.978244 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:47.978271 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:47.978199 749157 retry.go:31] will retry after 286.269777ms: waiting for machine to come up
I0920 18:12:48.265706 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:48.266154 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:48.266176 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.266104 749157 retry.go:31] will retry after 302.528012ms: waiting for machine to come up
I0920 18:12:48.570875 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:48.571362 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:48.571386 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:48.571312 749157 retry.go:31] will retry after 579.846713ms: waiting for machine to come up
I0920 18:12:49.153045 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:49.153478 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:49.153506 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.153418 749157 retry.go:31] will retry after 501.770816ms: waiting for machine to come up
I0920 18:12:49.657032 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:49.657383 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:49.657410 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:49.657355 749157 retry.go:31] will retry after 903.967154ms: waiting for machine to come up
I0920 18:12:50.562781 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:50.563350 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:50.563375 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:50.563286 749157 retry.go:31] will retry after 1.03177474s: waiting for machine to come up
I0920 18:12:51.596424 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:51.596850 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:51.596971 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:51.596890 749157 retry.go:31] will retry after 1.278733336s: waiting for machine to come up
I0920 18:12:52.877368 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:52.877732 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:52.877761 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:52.877690 749157 retry.go:31] will retry after 1.241144447s: waiting for machine to come up
I0920 18:12:54.121228 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:54.121598 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:54.121623 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:54.121564 749157 retry.go:31] will retry after 2.253509148s: waiting for machine to come up
I0920 18:12:56.377139 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:56.377598 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:56.377630 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:56.377537 749157 retry.go:31] will retry after 2.563830681s: waiting for machine to come up
I0920 18:12:58.944264 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:12:58.944679 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:12:58.944723 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:12:58.944624 749157 retry.go:31] will retry after 2.392098661s: waiting for machine to come up
I0920 18:13:01.339634 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:01.340032 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:13:01.340088 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:01.339990 749157 retry.go:31] will retry after 2.800869076s: waiting for machine to come up
I0920 18:13:04.142006 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:04.142476 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find current IP address of domain addons-446299 in network mk-addons-446299
I0920 18:13:04.142500 749135 main.go:141] libmachine: (addons-446299) DBG | I0920 18:13:04.142411 749157 retry.go:31] will retry after 4.101773144s: waiting for machine to come up
I0920 18:13:08.247401 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.247831 749135 main.go:141] libmachine: (addons-446299) Found IP for machine: 192.168.39.237
I0920 18:13:08.247867 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has current primary IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.247875 749135 main.go:141] libmachine: (addons-446299) Reserving static IP address...
I0920 18:13:08.248197 749135 main.go:141] libmachine: (addons-446299) DBG | unable to find host DHCP lease matching {name: "addons-446299", mac: "52:54:00:33:9c:3e", ip: "192.168.39.237"} in network mk-addons-446299
I0920 18:13:08.320366 749135 main.go:141] libmachine: (addons-446299) DBG | Getting to WaitForSSH function...
I0920 18:13:08.320400 749135 main.go:141] libmachine: (addons-446299) Reserved static IP address: 192.168.39.237
I0920 18:13:08.320413 749135 main.go:141] libmachine: (addons-446299) Waiting for SSH to be available...
I0920 18:13:08.323450 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.323840 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:08.323876 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.324043 749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH client type: external
I0920 18:13:08.324075 749135 main.go:141] libmachine: (addons-446299) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa (-rw-------)
I0920 18:13:08.324116 749135 main.go:141] libmachine: (addons-446299) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa -p 22] /usr/bin/ssh <nil>}
I0920 18:13:08.324134 749135 main.go:141] libmachine: (addons-446299) DBG | About to run SSH command:
I0920 18:13:08.324145 749135 main.go:141] libmachine: (addons-446299) DBG | exit 0
I0920 18:13:08.447247 749135 main.go:141] libmachine: (addons-446299) DBG | SSH cmd err, output: <nil>:
I0920 18:13:08.447526 749135 main.go:141] libmachine: (addons-446299) KVM machine creation complete!
I0920 18:13:08.447847 749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
I0920 18:13:08.448509 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:08.448699 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:08.448836 749135 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0920 18:13:08.448855 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:08.450187 749135 main.go:141] libmachine: Detecting operating system of created instance...
I0920 18:13:08.450200 749135 main.go:141] libmachine: Waiting for SSH to be available...
I0920 18:13:08.450206 749135 main.go:141] libmachine: Getting to WaitForSSH function...
I0920 18:13:08.450212 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:08.452411 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.452723 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:08.452751 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.452850 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:08.453019 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:08.453174 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:08.453318 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:08.453492 749135 main.go:141] libmachine: Using SSH client type: native
I0920 18:13:08.453697 749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0920 18:13:08.453711 749135 main.go:141] libmachine: About to run SSH command:
exit 0
I0920 18:13:08.550007 749135 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0920 18:13:08.550034 749135 main.go:141] libmachine: Detecting the provisioner...
I0920 18:13:08.550043 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:08.552709 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.553024 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:08.553055 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.553193 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:08.553387 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:08.553523 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:08.553628 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:08.553820 749135 main.go:141] libmachine: Using SSH client type: native
I0920 18:13:08.554035 749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0920 18:13:08.554048 749135 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0920 18:13:08.651415 749135 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0920 18:13:08.651508 749135 main.go:141] libmachine: found compatible host: buildroot
I0920 18:13:08.651519 749135 main.go:141] libmachine: Provisioning with buildroot...
I0920 18:13:08.651527 749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
I0920 18:13:08.651799 749135 buildroot.go:166] provisioning hostname "addons-446299"
I0920 18:13:08.651833 749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
I0920 18:13:08.652051 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:08.654630 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.654993 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:08.655016 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.655142 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:08.655325 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:08.655472 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:08.655580 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:08.655728 749135 main.go:141] libmachine: Using SSH client type: native
I0920 18:13:08.655930 749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0920 18:13:08.655944 749135 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-446299 && echo "addons-446299" | sudo tee /etc/hostname
I0920 18:13:08.764545 749135 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-446299
I0920 18:13:08.764579 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:08.767492 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.767918 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:08.767944 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.768198 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:08.768402 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:08.768591 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:08.768737 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:08.768929 749135 main.go:141] libmachine: Using SSH client type: native
I0920 18:13:08.769151 749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0920 18:13:08.769174 749135 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-446299' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-446299/g' /etc/hosts;
else
echo '127.0.1.1 addons-446299' | sudo tee -a /etc/hosts;
fi
fi
I0920 18:13:08.875844 749135 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0920 18:13:08.875886 749135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-739831/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-739831/.minikube}
I0920 18:13:08.875933 749135 buildroot.go:174] setting up certificates
I0920 18:13:08.875949 749135 provision.go:84] configureAuth start
I0920 18:13:08.875963 749135 main.go:141] libmachine: (addons-446299) Calling .GetMachineName
I0920 18:13:08.876262 749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
I0920 18:13:08.878744 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.879098 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:08.879119 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.879270 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:08.881403 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.881836 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:08.881865 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:08.881970 749135 provision.go:143] copyHostCerts
I0920 18:13:08.882095 749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/ca.pem (1078 bytes)
I0920 18:13:08.882283 749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/cert.pem (1123 bytes)
I0920 18:13:08.882377 749135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-739831/.minikube/key.pem (1679 bytes)
I0920 18:13:08.882472 749135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem org=jenkins.addons-446299 san=[127.0.0.1 192.168.39.237 addons-446299 localhost minikube]
I0920 18:13:09.208189 749135 provision.go:177] copyRemoteCerts
I0920 18:13:09.208279 749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0920 18:13:09.208315 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:09.211040 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.211327 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.211351 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.211544 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:09.211780 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:09.211947 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:09.212123 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:09.297180 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0920 18:13:09.320798 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0920 18:13:09.344012 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0920 18:13:09.366859 749135 provision.go:87] duration metric: took 490.878212ms to configureAuth
I0920 18:13:09.366893 749135 buildroot.go:189] setting minikube options for container-runtime
I0920 18:13:09.367101 749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:13:09.367184 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:09.369576 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.369868 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.369896 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.370087 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:09.370268 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:09.370416 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:09.370568 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:09.370692 749135 main.go:141] libmachine: Using SSH client type: native
I0920 18:13:09.370898 749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0920 18:13:09.370918 749135 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0920 18:13:09.580901 749135 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I0920 18:13:09.580930 749135 main.go:141] libmachine: Checking connection to Docker...
I0920 18:13:09.580938 749135 main.go:141] libmachine: (addons-446299) Calling .GetURL
I0920 18:13:09.582415 749135 main.go:141] libmachine: (addons-446299) DBG | Using libvirt version 6000000
I0920 18:13:09.584573 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.584892 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.584919 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.585053 749135 main.go:141] libmachine: Docker is up and running!
I0920 18:13:09.585065 749135 main.go:141] libmachine: Reticulating splines...
I0920 18:13:09.585073 749135 client.go:171] duration metric: took 24.047336599s to LocalClient.Create
I0920 18:13:09.585100 749135 start.go:167] duration metric: took 24.047408021s to libmachine.API.Create "addons-446299"
I0920 18:13:09.585116 749135 start.go:293] postStartSetup for "addons-446299" (driver="kvm2")
I0920 18:13:09.585129 749135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0920 18:13:09.585147 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:09.585408 749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0920 18:13:09.585435 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:09.587350 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.587666 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.587695 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.587795 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:09.587993 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:09.588132 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:09.588235 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:09.664940 749135 ssh_runner.go:195] Run: cat /etc/os-release
I0920 18:13:09.669300 749135 info.go:137] Remote host: Buildroot 2023.02.9
I0920 18:13:09.669326 749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/addons for local assets ...
I0920 18:13:09.669399 749135 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-739831/.minikube/files for local assets ...
I0920 18:13:09.669426 749135 start.go:296] duration metric: took 84.302482ms for postStartSetup
I0920 18:13:09.669464 749135 main.go:141] libmachine: (addons-446299) Calling .GetConfigRaw
I0920 18:13:09.670097 749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
I0920 18:13:09.672635 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.673027 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.673059 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.673292 749135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/config.json ...
I0920 18:13:09.673507 749135 start.go:128] duration metric: took 24.155298051s to createHost
I0920 18:13:09.673535 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:09.675782 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.676085 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.676118 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.676239 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:09.676425 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:09.676577 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:09.676704 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:09.676850 749135 main.go:141] libmachine: Using SSH client type: native
I0920 18:13:09.677016 749135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0920 18:13:09.677026 749135 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0920 18:13:09.775435 749135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726855989.751621835
I0920 18:13:09.775464 749135 fix.go:216] guest clock: 1726855989.751621835
I0920 18:13:09.775474 749135 fix.go:229] Guest: 2024-09-20 18:13:09.751621835 +0000 UTC Remote: 2024-09-20 18:13:09.673520947 +0000 UTC m=+24.255782208 (delta=78.100888ms)
I0920 18:13:09.775526 749135 fix.go:200] guest clock delta is within tolerance: 78.100888ms
I0920 18:13:09.775540 749135 start.go:83] releasing machines lock for "addons-446299", held for 24.257428579s
I0920 18:13:09.775567 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:09.775862 749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
I0920 18:13:09.778659 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.779012 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.779037 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.779220 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:09.779691 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:09.779841 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:09.779938 749135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0920 18:13:09.779984 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:09.780090 749135 ssh_runner.go:195] Run: cat /version.json
I0920 18:13:09.780115 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:09.782348 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.782682 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.782703 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.782721 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.782827 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:09.783033 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:09.783120 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:09.783141 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:09.783235 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:09.783325 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:09.783381 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:09.783467 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:09.783589 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:09.783728 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:09.855541 749135 ssh_runner.go:195] Run: systemctl --version
I0920 18:13:09.885114 749135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0920 18:13:10.038473 749135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0920 18:13:10.044604 749135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0920 18:13:10.044673 749135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0920 18:13:10.061773 749135 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0920 18:13:10.061802 749135 start.go:495] detecting cgroup driver to use...
I0920 18:13:10.061871 749135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0920 18:13:10.078163 749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0920 18:13:10.092123 749135 docker.go:217] disabling cri-docker service (if available) ...
I0920 18:13:10.092186 749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0920 18:13:10.105354 749135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0920 18:13:10.118581 749135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0920 18:13:10.228500 749135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0920 18:13:10.385243 749135 docker.go:233] disabling docker service ...
I0920 18:13:10.385317 749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0920 18:13:10.399346 749135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0920 18:13:10.411799 749135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0920 18:13:10.532538 749135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0920 18:13:10.657590 749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0920 18:13:10.672417 749135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0920 18:13:10.690910 749135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I0920 18:13:10.690989 749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I0920 18:13:10.701918 749135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0920 18:13:10.702004 749135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0920 18:13:10.712909 749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0920 18:13:10.723847 749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0920 18:13:10.734707 749135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0920 18:13:10.745859 749135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0920 18:13:10.756720 749135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0920 18:13:10.781698 749135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I0920 18:13:10.792301 749135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0920 18:13:10.801512 749135 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0920 18:13:10.801614 749135 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0920 18:13:10.815061 749135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0920 18:13:10.824568 749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 18:13:10.942263 749135 ssh_runner.go:195] Run: sudo systemctl restart crio
I0920 18:13:11.344964 749135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I0920 18:13:11.345085 749135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0920 18:13:11.350594 749135 start.go:563] Will wait 60s for crictl version
I0920 18:13:11.350677 749135 ssh_runner.go:195] Run: which crictl
I0920 18:13:11.354600 749135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0920 18:13:11.392003 749135 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I0920 18:13:11.392112 749135 ssh_runner.go:195] Run: crio --version
I0920 18:13:11.424468 749135 ssh_runner.go:195] Run: crio --version
I0920 18:13:11.468344 749135 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
I0920 18:13:11.469889 749135 main.go:141] libmachine: (addons-446299) Calling .GetIP
I0920 18:13:11.472633 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:11.472955 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:11.472986 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:11.473236 749135 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0920 18:13:11.477639 749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
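The `/etc/hosts` update above uses a grep-then-append pipeline: drop any line already ending in the tab-separated hostname, then append the fresh mapping. A pure-string Go sketch of that rewrite (names here are illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"strings"
)

// setHostsEntry removes any existing line for name (matched by a
// trailing "\t<name>", as the grep -v $'\t<name>$' filter does) and
// appends "ip\tname", returning the rewritten hosts file contents.
func setHostsEntry(hosts, name, ip string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
	// Replaces the stale 10.0.0.5 entry with the gateway address.
	fmt.Print(setHostsEntry(hosts, "host.minikube.internal", "192.168.39.1"))
}
```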
I0920 18:13:11.490126 749135 kubeadm.go:883] updating cluster {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0920 18:13:11.490246 749135 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 18:13:11.490303 749135 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 18:13:11.522179 749135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
I0920 18:13:11.522257 749135 ssh_runner.go:195] Run: which lz4
I0920 18:13:11.526368 749135 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0920 18:13:11.530534 749135 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0920 18:13:11.530569 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
I0920 18:13:12.754100 749135 crio.go:462] duration metric: took 1.227762585s to copy over tarball
I0920 18:13:12.754195 749135 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0920 18:13:14.814758 749135 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060523421s)
I0920 18:13:14.814798 749135 crio.go:469] duration metric: took 2.06066428s to extract the tarball
I0920 18:13:14.814808 749135 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0920 18:13:14.850931 749135 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 18:13:14.892855 749135 crio.go:514] all images are preloaded for cri-o runtime.
I0920 18:13:14.892884 749135 cache_images.go:84] Images are preloaded, skipping loading
I0920 18:13:14.892894 749135 kubeadm.go:934] updating node { 192.168.39.237 8443 v1.31.1 crio true true} ...
I0920 18:13:14.893002 749135 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-446299 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0920 18:13:14.893069 749135 ssh_runner.go:195] Run: crio config
I0920 18:13:14.935948 749135 cni.go:84] Creating CNI manager for ""
I0920 18:13:14.935974 749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0920 18:13:14.935987 749135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0920 18:13:14.936010 749135 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-446299 NodeName:addons-446299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0920 18:13:14.936153 749135 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.237
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-446299"
  kubeletExtraArgs:
    node-ip: 192.168.39.237
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0920 18:13:14.936224 749135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0920 18:13:14.945879 749135 binaries.go:44] Found k8s binaries, skipping transfer
I0920 18:13:14.945951 749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0920 18:13:14.955112 749135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0920 18:13:14.971443 749135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0920 18:13:14.987494 749135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
I0920 18:13:15.004128 749135 ssh_runner.go:195] Run: grep 192.168.39.237 control-plane.minikube.internal$ /etc/hosts
I0920 18:13:15.008311 749135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0920 18:13:15.020386 749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 18:13:15.143207 749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0920 18:13:15.160928 749135 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299 for IP: 192.168.39.237
I0920 18:13:15.160952 749135 certs.go:194] generating shared ca certs ...
I0920 18:13:15.160971 749135 certs.go:226] acquiring lock for ca certs: {Name:mkf559981e1ff96dd3b092845a7637f34a653668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.161127 749135 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key
I0920 18:13:15.288325 749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt ...
I0920 18:13:15.288359 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt: {Name:mkd07e710befe398f359697123be87266dbb73cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.288526 749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key ...
I0920 18:13:15.288537 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key: {Name:mk8452559729a4e6fe54cdcaa3db5cb2d03b365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.288610 749135 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key
I0920 18:13:15.460720 749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt ...
I0920 18:13:15.460749 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt: {Name:mkd5912367400d11fe28d50162d9491c1c026ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.460926 749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key ...
I0920 18:13:15.460946 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key: {Name:mk7b4a10567303413b299060d87451a86c82a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.461047 749135 certs.go:256] generating profile certs ...
I0920 18:13:15.461131 749135 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key
I0920 18:13:15.461148 749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt with IP's: []
I0920 18:13:15.666412 749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt ...
I0920 18:13:15.666455 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.crt: {Name:mkef01489d7dcf2bfb46ac5af11bed50283fb691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.666668 749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key ...
I0920 18:13:15.666687 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/client.key: {Name:mkce7236a454e2c0202c83ef853c169198fb2f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.666791 749135 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387
I0920 18:13:15.666816 749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
I0920 18:13:15.705625 749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 ...
I0920 18:13:15.705654 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387: {Name:mk64bf6bb73ff35990c8781efc3d30626dc3ca21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.705826 749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 ...
I0920 18:13:15.705843 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387: {Name:mk18ead88f15a69013b31853d623fd0cb8c39466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.705941 749135 certs.go:381] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt
I0920 18:13:15.706040 749135 certs.go:385] copying /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key.77016387 -> /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key
I0920 18:13:15.706114 749135 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key
I0920 18:13:15.706140 749135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt with IP's: []
I0920 18:13:15.788260 749135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt ...
I0920 18:13:15.788293 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt: {Name:mk5ff8fc31363db98a0f0ca7278de49be24b8420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.788475 749135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key ...
I0920 18:13:15.788494 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key: {Name:mk7a90a72aaffce450a2196a523cb38d8ddfd4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:15.788714 749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca-key.pem (1675 bytes)
I0920 18:13:15.788762 749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/ca.pem (1078 bytes)
I0920 18:13:15.788796 749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/cert.pem (1123 bytes)
I0920 18:13:15.788835 749135 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-739831/.minikube/certs/key.pem (1679 bytes)
I0920 18:13:15.789513 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0920 18:13:15.814280 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0920 18:13:15.838979 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0920 18:13:15.861251 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0920 18:13:15.883772 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0920 18:13:15.906899 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0920 18:13:15.930055 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0920 18:13:15.952960 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/profiles/addons-446299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0920 18:13:15.976078 749135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-739831/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0920 18:13:15.998990 749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0920 18:13:16.015378 749135 ssh_runner.go:195] Run: openssl version
I0920 18:13:16.021288 749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0920 18:13:16.031743 749135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0920 18:13:16.036218 749135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:13 /usr/share/ca-certificates/minikubeCA.pem
I0920 18:13:16.036292 749135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0920 18:13:16.041983 749135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0920 18:13:16.052410 749135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0920 18:13:16.056509 749135 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0920 18:13:16.056561 749135 kubeadm.go:392] StartCluster: {Name:addons-446299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-446299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 18:13:16.056643 749135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0920 18:13:16.056724 749135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0920 18:13:16.093233 749135 cri.go:89] found id: ""
I0920 18:13:16.093305 749135 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0920 18:13:16.103183 749135 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0920 18:13:16.112220 749135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0920 18:13:16.121055 749135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0920 18:13:16.121076 749135 kubeadm.go:157] found existing configuration files:
I0920 18:13:16.121125 749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0920 18:13:16.129727 749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0920 18:13:16.129793 749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0920 18:13:16.138769 749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0920 18:13:16.147343 749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0920 18:13:16.147401 749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0920 18:13:16.156084 749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0920 18:13:16.164356 749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0920 18:13:16.164409 749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0920 18:13:16.172957 749135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0920 18:13:16.181269 749135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0920 18:13:16.181319 749135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0920 18:13:16.189971 749135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0920 18:13:16.241816 749135 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0920 18:13:16.242023 749135 kubeadm.go:310] [preflight] Running pre-flight checks
I0920 18:13:16.343705 749135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0920 18:13:16.343865 749135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0920 18:13:16.344016 749135 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0920 18:13:16.353422 749135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0920 18:13:16.356505 749135 out.go:235] - Generating certificates and keys ...
I0920 18:13:16.356621 749135 kubeadm.go:310] [certs] Using existing ca certificate authority
I0920 18:13:16.356707 749135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0920 18:13:16.567905 749135 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0920 18:13:16.678138 749135 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0920 18:13:16.903150 749135 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0920 18:13:17.220781 749135 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0920 18:13:17.330970 749135 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0920 18:13:17.331262 749135 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
I0920 18:13:17.404562 749135 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0920 18:13:17.404723 749135 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-446299 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
I0920 18:13:17.558748 749135 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0920 18:13:17.723982 749135 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0920 18:13:17.850510 749135 kubeadm.go:310] [certs] Generating "sa" key and public key
I0920 18:13:17.850712 749135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0920 18:13:17.910185 749135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0920 18:13:18.072173 749135 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0920 18:13:18.135494 749135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0920 18:13:18.547143 749135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0920 18:13:18.760484 749135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0920 18:13:18.761203 749135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0920 18:13:18.765007 749135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0920 18:13:18.801126 749135 out.go:235] - Booting up control plane ...
I0920 18:13:18.801251 749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0920 18:13:18.801344 749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0920 18:13:18.801424 749135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0920 18:13:18.801571 749135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0920 18:13:18.801721 749135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0920 18:13:18.801785 749135 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0920 18:13:18.927609 749135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0920 18:13:18.927774 749135 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0920 18:13:19.928576 749135 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001817815s
I0920 18:13:19.928734 749135 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0920 18:13:24.427415 749135 kubeadm.go:310] [api-check] The API server is healthy after 4.501490258s
I0920 18:13:24.439460 749135 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0920 18:13:24.456660 749135 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0920 18:13:24.489726 749135 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0920 18:13:24.489974 749135 kubeadm.go:310] [mark-control-plane] Marking the node addons-446299 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0920 18:13:24.502419 749135 kubeadm.go:310] [bootstrap-token] Using token: 2qbco4.c4cth5cwyyzw51bf
I0920 18:13:24.503870 749135 out.go:235] - Configuring RBAC rules ...
I0920 18:13:24.504029 749135 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0920 18:13:24.514334 749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0920 18:13:24.520831 749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0920 18:13:24.524418 749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0920 18:13:24.527658 749135 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0920 18:13:24.533751 749135 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0920 18:13:24.833210 749135 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0920 18:13:25.263206 749135 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0920 18:13:25.833304 749135 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0920 18:13:25.834184 749135 kubeadm.go:310]
I0920 18:13:25.834298 749135 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0920 18:13:25.834327 749135 kubeadm.go:310]
I0920 18:13:25.834438 749135 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0920 18:13:25.834450 749135 kubeadm.go:310]
I0920 18:13:25.834490 749135 kubeadm.go:310] mkdir -p $HOME/.kube
I0920 18:13:25.834595 749135 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0920 18:13:25.834657 749135 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0920 18:13:25.834674 749135 kubeadm.go:310]
I0920 18:13:25.834745 749135 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0920 18:13:25.834754 749135 kubeadm.go:310]
I0920 18:13:25.834980 749135 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0920 18:13:25.834997 749135 kubeadm.go:310]
I0920 18:13:25.835059 749135 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0920 18:13:25.835163 749135 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0920 18:13:25.835253 749135 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0920 18:13:25.835263 749135 kubeadm.go:310]
I0920 18:13:25.835376 749135 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0920 18:13:25.835483 749135 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0920 18:13:25.835490 749135 kubeadm.go:310]
I0920 18:13:25.835595 749135 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
I0920 18:13:25.835757 749135 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d \
I0920 18:13:25.835806 749135 kubeadm.go:310] --control-plane
I0920 18:13:25.835816 749135 kubeadm.go:310]
I0920 18:13:25.835914 749135 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0920 18:13:25.835926 749135 kubeadm.go:310]
I0920 18:13:25.836021 749135 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2qbco4.c4cth5cwyyzw51bf \
I0920 18:13:25.836149 749135 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:947ef21afc8104efa9fe7e5dbe397ab7540e2665a521761d784eb9c9d11b061d
I0920 18:13:25.837593 749135 kubeadm.go:310] W0920 18:13:16.222475 810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 18:13:25.837868 749135 kubeadm.go:310] W0920 18:13:16.223486 810 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 18:13:25.837990 749135 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0920 18:13:25.838019 749135 cni.go:84] Creating CNI manager for ""
I0920 18:13:25.838028 749135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0920 18:13:25.839751 749135 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0920 18:13:25.840949 749135 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0920 18:13:25.852783 749135 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0920 18:13:25.871921 749135 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0920 18:13:25.871998 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:25.872010 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-446299 minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-446299 minikube.k8s.io/primary=true
I0920 18:13:25.893378 749135 ops.go:34] apiserver oom_adj: -16
I0920 18:13:26.025723 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:26.526635 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:27.026038 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:27.526100 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:28.026195 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:28.526494 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:29.026560 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:29.526369 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:30.026015 749135 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 18:13:30.116670 749135 kubeadm.go:1113] duration metric: took 4.244739753s to wait for elevateKubeSystemPrivileges
I0920 18:13:30.116706 749135 kubeadm.go:394] duration metric: took 14.06015239s to StartCluster
I0920 18:13:30.116726 749135 settings.go:142] acquiring lock: {Name:mk0bd1e421bf437575c076c52c1ff2f74497a1ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:30.116861 749135 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19678-739831/kubeconfig
I0920 18:13:30.117227 749135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-739831/kubeconfig: {Name:mk275c54cf52b0ccdc22fcaa39c7b9c31092c648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 18:13:30.117422 749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0920 18:13:30.117448 749135 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I0920 18:13:30.117512 749135 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0920 18:13:30.117640 749135 addons.go:69] Setting yakd=true in profile "addons-446299"
I0920 18:13:30.117667 749135 addons.go:234] Setting addon yakd=true in "addons-446299"
I0920 18:13:30.117700 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.117727 749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:13:30.117688 749135 addons.go:69] Setting default-storageclass=true in profile "addons-446299"
I0920 18:13:30.117804 749135 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-446299"
I0920 18:13:30.117694 749135 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-446299"
I0920 18:13:30.117828 749135 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-446299"
I0920 18:13:30.117867 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.117708 749135 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-446299"
I0920 18:13:30.117998 749135 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-446299"
I0920 18:13:30.117714 749135 addons.go:69] Setting inspektor-gadget=true in profile "addons-446299"
I0920 18:13:30.118028 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.118044 749135 addons.go:234] Setting addon inspektor-gadget=true in "addons-446299"
I0920 18:13:30.118082 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.117716 749135 addons.go:69] Setting gcp-auth=true in profile "addons-446299"
I0920 18:13:30.118200 749135 mustload.go:65] Loading cluster: addons-446299
I0920 18:13:30.118199 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.118219 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.117703 749135 addons.go:69] Setting ingress-dns=true in profile "addons-446299"
I0920 18:13:30.118237 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.118242 749135 addons.go:234] Setting addon ingress-dns=true in "addons-446299"
I0920 18:13:30.118250 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.118270 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.118376 749135 config.go:182] Loaded profile config "addons-446299": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 18:13:30.118380 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.118401 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.118492 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.118530 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.118647 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.118678 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.117720 749135 addons.go:69] Setting metrics-server=true in profile "addons-446299"
I0920 18:13:30.118748 749135 addons.go:234] Setting addon metrics-server=true in "addons-446299"
I0920 18:13:30.118777 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.118823 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.118831 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.118883 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.118889 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.117726 749135 addons.go:69] Setting ingress=true in profile "addons-446299"
I0920 18:13:30.119096 749135 addons.go:234] Setting addon ingress=true in "addons-446299"
I0920 18:13:30.119137 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.117736 749135 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-446299"
I0920 18:13:30.119353 749135 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-446299"
I0920 18:13:30.119501 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.119521 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.119740 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.117735 749135 addons.go:69] Setting registry=true in profile "addons-446299"
I0920 18:13:30.119761 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.119766 749135 addons.go:234] Setting addon registry=true in "addons-446299"
I0920 18:13:30.119795 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.120169 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.120211 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.117735 749135 addons.go:69] Setting cloud-spanner=true in profile "addons-446299"
I0920 18:13:30.120247 749135 addons.go:234] Setting addon cloud-spanner=true in "addons-446299"
I0920 18:13:30.117743 749135 addons.go:69] Setting volcano=true in profile "addons-446299"
I0920 18:13:30.120264 749135 addons.go:234] Setting addon volcano=true in "addons-446299"
I0920 18:13:30.120292 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.120352 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.117744 749135 addons.go:69] Setting storage-provisioner=true in profile "addons-446299"
I0920 18:13:30.120495 749135 addons.go:234] Setting addon storage-provisioner=true in "addons-446299"
I0920 18:13:30.120536 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.120768 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.120790 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.117753 749135 addons.go:69] Setting volumesnapshots=true in profile "addons-446299"
I0920 18:13:30.120925 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.120933 749135 addons.go:234] Setting addon volumesnapshots=true in "addons-446299"
I0920 18:13:30.120955 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.120966 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.122929 749135 out.go:177] * Verifying Kubernetes components...
I0920 18:13:30.124310 749135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0920 18:13:30.139606 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
I0920 18:13:30.139626 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43313
I0920 18:13:30.139664 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
I0920 18:13:30.139664 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35171
I0920 18:13:30.151212 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
I0920 18:13:30.151245 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.151251 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
I0920 18:13:30.151274 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.151393 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.151405 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.151438 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.151856 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.151891 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.152064 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.152188 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.152245 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.152411 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.152423 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.152487 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.152534 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.152664 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.152678 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.152736 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.152850 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.152861 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.152984 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.152995 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.153048 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.153483 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.153515 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.154013 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
I0920 18:13:30.154291 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.154314 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.154382 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.154805 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.154867 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.155632 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.155794 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.155815 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.155882 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.156284 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.156326 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.159168 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.159296 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.159618 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.159652 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.159773 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.159808 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.160117 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.160143 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.160217 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.160647 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.161813 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.161856 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.164600 749135 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-446299"
I0920 18:13:30.164649 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.165039 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.165072 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.176807 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
I0920 18:13:30.177469 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.178091 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.178111 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.178583 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.179242 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.179271 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.185984 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
I0920 18:13:30.186586 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.187123 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.187144 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.187554 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.188160 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.188203 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.193206 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
I0920 18:13:30.193417 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
I0920 18:13:30.193849 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.194099 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.194452 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.194471 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.194968 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.195118 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.195132 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.195349 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
I0920 18:13:30.195438 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.196077 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.196556 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.196580 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.197033 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.197694 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.197734 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.197960 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
I0920 18:13:30.198500 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.198621 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.198726 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
I0920 18:13:30.198876 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.199030 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.199369 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.199385 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.199416 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.199438 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.199710 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.200318 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.200362 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.200438 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.201288 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.201893 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.201916 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.203229 749135 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0920 18:13:30.204746 749135 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0920 18:13:30.204766 749135 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0920 18:13:30.204788 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.206295 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.206675 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.207700 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
I0920 18:13:30.208147 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.208668 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.208691 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.209400 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.209672 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.209714 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.210328 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.210357 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.210920 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.210948 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.211140 749135 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0920 18:13:30.211638 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.212145 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.212323 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.212494 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.212630 749135 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 18:13:30.212646 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0920 18:13:30.212664 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.213593 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
I0920 18:13:30.214660 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
I0920 18:13:30.215405 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.215903 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.215924 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.216384 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.216437 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.216507 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.216537 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.216592 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
I0920 18:13:30.217041 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.217047 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.217305 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.217448 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.217585 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.218334 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.218356 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.218795 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.219018 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.219181 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
I0920 18:13:30.219880 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.219925 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.219979 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.220067 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.220460 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.220482 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.220702 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.220722 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.220787 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
I0920 18:13:30.221095 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.221183 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.221329 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.221386 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:30.221397 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:30.223334 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.223352 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.223398 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:30.223412 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:30.223419 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:30.223427 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:30.223433 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:30.223529 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.224012 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:30.224041 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:30.224048 749135 main.go:141] libmachine: Making call to close connection to plugin binary
W0920 18:13:30.224154 749135 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I0920 18:13:30.224543 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
I0920 18:13:30.225486 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.225509 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.226183 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.226202 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.226560 749135 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0920 18:13:30.226986 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.227285 749135 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0920 18:13:30.227644 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.227684 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.228253 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
I0920 18:13:30.228649 749135 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0920 18:13:30.228675 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0920 18:13:30.228697 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.229313 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
I0920 18:13:30.229673 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.230049 749135 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0920 18:13:30.230142 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.230158 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.230485 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.230672 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.231280 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.231806 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
I0920 18:13:30.231963 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.231988 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.232145 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.232332 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.232428 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.232440 749135 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0920 18:13:30.232482 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.232696 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.233542 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.233796 749135 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 18:13:30.234419 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.234438 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.234783 749135 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0920 18:13:30.235010 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.235348 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.236127 749135 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0920 18:13:30.236900 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
I0920 18:13:30.237440 749135 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0920 18:13:30.237599 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
I0920 18:13:30.238719 749135 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 18:13:30.239949 749135 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0920 18:13:30.240129 749135 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0920 18:13:30.240146 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0920 18:13:30.240162 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.242347 749135 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0920 18:13:30.243261 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.243644 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.243673 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.243908 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.244083 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.244194 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.244349 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.244407 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
I0920 18:13:30.244610 749135 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0920 18:13:30.245914 749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0920 18:13:30.245941 749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0920 18:13:30.245963 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.246673 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
I0920 18:13:30.247429 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.247556 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.247990 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.248061 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.248074 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.248079 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.248343 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.248449 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.248449 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.248468 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.248596 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.248607 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.248648 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.248833 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.249170 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.249280 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.249352 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.249393 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.249409 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.250084 749135 addons.go:234] Setting addon default-storageclass=true in "addons-446299"
I0920 18:13:30.250124 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:30.250508 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.250532 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.251170 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.251192 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.251274 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.251488 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.251857 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.251862 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.251910 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.251940 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.252078 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.252212 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.252224 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.252440 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.252553 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.252748 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.252820 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.252833 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.253735 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.253941 749135 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0920 18:13:30.254017 749135 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0920 18:13:30.253980 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.254455 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.254656 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.254870 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.254873 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.255177 749135 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0920 18:13:30.255187 749135 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0920 18:13:30.255205 749135 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0920 18:13:30.255226 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.255274 749135 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0920 18:13:30.255278 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.255288 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0920 18:13:30.255303 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.256466 749135 out.go:177] - Using image docker.io/registry:2.8.3
I0920 18:13:30.256532 749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0920 18:13:30.256552 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0920 18:13:30.256570 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.258154 749135 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0920 18:13:30.259159 749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0920 18:13:30.259174 749135 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0920 18:13:30.259188 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.259235 749135 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0920 18:13:30.260368 749135 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0920 18:13:30.260382 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0920 18:13:30.260394 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.260519 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.260844 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.260873 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.261038 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.261196 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.262948 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.263013 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.263033 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.263050 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.263161 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.263545 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.263701 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.264179 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.264417 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.264628 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.265340 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.265500 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.265732 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.265751 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.266060 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.266249 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.266266 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.266441 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.266593 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.266625 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.266670 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.266742 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.267063 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.267118 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.267232 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.267247 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.267357 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.267382 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.267549 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.267839 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.269511 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
I0920 18:13:30.269878 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.270901 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.270926 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.271296 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.271468 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.273221 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.274917 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
I0920 18:13:30.275136 749135 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0920 18:13:30.275446 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.276076 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.276096 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.276414 749135 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0920 18:13:30.276440 749135 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0920 18:13:30.276461 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.276501 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.276736 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.278674 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.280057 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.280316 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.280342 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.280375 749135 out.go:177] - Using image docker.io/busybox:stable
I0920 18:13:30.280530 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.280706 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.280828 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.280961 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
W0920 18:13:30.281845 749135 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
I0920 18:13:30.281937 749135 retry.go:31] will retry after 148.234221ms: ssh: handshake failed: read tcp 192.168.39.1:35600->192.168.39.237:22: read: connection reset by peer
I0920 18:13:30.282766 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
I0920 18:13:30.282794 749135 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0920 18:13:30.283193 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.283743 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.283764 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.284120 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.284286 749135 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 18:13:30.284302 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0920 18:13:30.284319 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.284696 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:30.284848 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:30.290962 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.290998 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.291015 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.291035 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.291443 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.291607 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.291761 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.301013 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
I0920 18:13:30.301540 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:30.302060 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:30.302090 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:30.302449 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:30.302621 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:30.303997 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:30.304220 749135 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0920 18:13:30.304236 749135 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0920 18:13:30.304256 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:30.307237 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.307715 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:30.307749 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:30.307899 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:30.308079 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:30.308237 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:30.308392 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:30.604495 749135 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0920 18:13:30.604525 749135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0920 18:13:30.661112 749135 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0920 18:13:30.661146 749135 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0920 18:13:30.662437 749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0920 18:13:30.662469 749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0920 18:13:30.705589 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0920 18:13:30.750149 749135 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0920 18:13:30.750187 749135 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0920 18:13:30.753172 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0920 18:13:30.755196 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 18:13:30.771513 749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0920 18:13:30.771540 749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0920 18:13:30.797810 749135 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0920 18:13:30.797835 749135 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0920 18:13:30.807101 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 18:13:30.868448 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0920 18:13:30.869944 749135 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0920 18:13:30.869963 749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0920 18:13:30.871146 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0920 18:13:30.896462 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0920 18:13:30.900930 749135 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0920 18:13:30.900959 749135 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0920 18:13:30.906831 749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0920 18:13:30.906880 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0920 18:13:30.933744 749135 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0920 18:13:30.933774 749135 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0920 18:13:30.969038 749135 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0920 18:13:30.969076 749135 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0920 18:13:31.000321 749135 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0920 18:13:31.000354 749135 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0920 18:13:31.182228 749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0920 18:13:31.182256 749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0920 18:13:31.198470 749135 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0920 18:13:31.198506 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0920 18:13:31.232002 749135 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0920 18:13:31.232027 749135 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0920 18:13:31.241138 749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0920 18:13:31.241162 749135 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0920 18:13:31.303359 749135 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0920 18:13:31.303389 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0920 18:13:31.308659 749135 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0920 18:13:31.308686 749135 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0920 18:13:31.411918 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0920 18:13:31.444332 749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0920 18:13:31.444368 749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0920 18:13:31.517643 749135 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 18:13:31.517669 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0920 18:13:31.522528 749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0920 18:13:31.522555 749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0920 18:13:31.527932 749135 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0920 18:13:31.527961 749135 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0920 18:13:31.598680 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0920 18:13:31.753266 749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0920 18:13:31.753305 749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0920 18:13:31.825090 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 18:13:31.868789 749135 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0920 18:13:31.868821 749135 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0920 18:13:31.871872 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0920 18:13:32.035165 749135 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0920 18:13:32.035205 749135 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0920 18:13:32.325034 749135 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0920 18:13:32.325068 749135 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0920 18:13:32.426301 749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0920 18:13:32.426330 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0920 18:13:32.734227 749135 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0920 18:13:32.734252 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0920 18:13:32.776162 749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0920 18:13:32.776201 749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0920 18:13:32.973816 749135 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369238207s)
I0920 18:13:32.973844 749135 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.369303036s)
I0920 18:13:32.973868 749135 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0920 18:13:32.974717 749135 node_ready.go:35] waiting up to 6m0s for node "addons-446299" to be "Ready" ...
I0920 18:13:32.978640 749135 node_ready.go:49] node "addons-446299" has status "Ready":"True"
I0920 18:13:32.978660 749135 node_ready.go:38] duration metric: took 3.921107ms for node "addons-446299" to be "Ready" ...
I0920 18:13:32.978672 749135 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 18:13:32.990987 749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
I0920 18:13:33.092955 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0920 18:13:33.125330 749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0920 18:13:33.125357 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0920 18:13:33.271505 749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0920 18:13:33.271534 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0920 18:13:33.497723 749135 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-446299" context rescaled to 1 replicas
I0920 18:13:33.600812 749135 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 18:13:33.600847 749135 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0920 18:13:33.656016 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.902807697s)
I0920 18:13:33.656075 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:33.656075 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.900839477s)
I0920 18:13:33.656016 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.950386811s)
I0920 18:13:33.656109 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:33.656121 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:33.656127 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:33.656090 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:33.656146 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:33.656567 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:33.656587 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:33.656608 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:33.656624 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:33.656627 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:33.656653 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:33.656665 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:33.656676 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:33.656635 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:33.656718 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:33.656637 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:33.656744 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:33.656760 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:33.656767 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:33.656730 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:33.657076 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:33.657118 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:33.657119 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:33.657096 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:33.657156 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:33.657263 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:33.657279 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:33.758218 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 18:13:35.015799 749135 pod_ready.go:103] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:35.494820 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.687683083s)
I0920 18:13:35.494889 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:35.494891 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.626405857s)
I0920 18:13:35.494920 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:35.494932 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:35.494930 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.623755287s)
I0920 18:13:35.494950 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:35.494983 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:35.495052 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:35.495370 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:35.495388 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:35.495396 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:35.495404 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:35.496899 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:35.496907 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:35.496907 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:35.496946 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:35.496958 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:35.496966 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:35.496977 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:35.496990 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:35.496999 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:35.497065 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:35.497077 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:35.497089 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:35.497098 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:35.497258 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:35.497276 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:35.498278 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:35.498290 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:35.498301 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:35.545445 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:35.545475 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:35.545718 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:35.545745 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:35.545752 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
W0920 18:13:35.545859 749135 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0920 18:13:35.559802 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:35.559831 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:35.560074 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:35.560092 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:35.560108 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:36.023603 749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace has status "Ready":"True"
I0920 18:13:36.023630 749135 pod_ready.go:82] duration metric: took 3.032619357s for pod "coredns-7c65d6cfc9-8b5fx" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.023643 749135 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.059659 749135 pod_ready.go:93] pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace has status "Ready":"True"
I0920 18:13:36.059693 749135 pod_ready.go:82] duration metric: took 36.040161ms for pod "coredns-7c65d6cfc9-tfngl" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.059705 749135 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.075393 749135 pod_ready.go:93] pod "etcd-addons-446299" in "kube-system" namespace has status "Ready":"True"
I0920 18:13:36.075428 749135 pod_ready.go:82] duration metric: took 15.714418ms for pod "etcd-addons-446299" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.075441 749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.089509 749135 pod_ready.go:93] pod "kube-apiserver-addons-446299" in "kube-system" namespace has status "Ready":"True"
I0920 18:13:36.089536 749135 pod_ready.go:82] duration metric: took 14.086774ms for pod "kube-apiserver-addons-446299" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.089546 749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.600534 749135 pod_ready.go:93] pod "kube-controller-manager-addons-446299" in "kube-system" namespace has status "Ready":"True"
I0920 18:13:36.600565 749135 pod_ready.go:82] duration metric: took 511.011851ms for pod "kube-controller-manager-addons-446299" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.600579 749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.797080 749135 pod_ready.go:93] pod "kube-proxy-9pcgb" in "kube-system" namespace has status "Ready":"True"
I0920 18:13:36.797111 749135 pod_ready.go:82] duration metric: took 196.523175ms for pod "kube-proxy-9pcgb" in "kube-system" namespace to be "Ready" ...
I0920 18:13:36.797123 749135 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
I0920 18:13:37.195153 749135 pod_ready.go:93] pod "kube-scheduler-addons-446299" in "kube-system" namespace has status "Ready":"True"
I0920 18:13:37.195185 749135 pod_ready.go:82] duration metric: took 398.053895ms for pod "kube-scheduler-addons-446299" in "kube-system" namespace to be "Ready" ...
I0920 18:13:37.195198 749135 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
I0920 18:13:37.260708 749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0920 18:13:37.260749 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:37.264035 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:37.264543 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:37.264579 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:37.264739 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:37.264958 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:37.265141 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:37.265285 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:37.472764 749135 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0920 18:13:37.656998 749135 addons.go:234] Setting addon gcp-auth=true in "addons-446299"
I0920 18:13:37.657072 749135 host.go:66] Checking if "addons-446299" exists ...
I0920 18:13:37.657494 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:37.657545 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:37.673709 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
I0920 18:13:37.674398 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:37.674958 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:37.674981 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:37.675363 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:37.675843 749135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 18:13:37.675888 749135 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:13:37.691444 749135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
I0920 18:13:37.692042 749135 main.go:141] libmachine: () Calling .GetVersion
I0920 18:13:37.692560 749135 main.go:141] libmachine: Using API Version 1
I0920 18:13:37.692593 749135 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:13:37.693006 749135 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:13:37.693249 749135 main.go:141] libmachine: (addons-446299) Calling .GetState
I0920 18:13:37.695166 749135 main.go:141] libmachine: (addons-446299) Calling .DriverName
I0920 18:13:37.695451 749135 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0920 18:13:37.695481 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHHostname
I0920 18:13:37.698450 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:37.698921 749135 main.go:141] libmachine: (addons-446299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9c:3e", ip: ""} in network mk-addons-446299: {Iface:virbr1 ExpiryTime:2024-09-20 19:13:00 +0000 UTC Type:0 Mac:52:54:00:33:9c:3e Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-446299 Clientid:01:52:54:00:33:9c:3e}
I0920 18:13:37.698953 749135 main.go:141] libmachine: (addons-446299) DBG | domain addons-446299 has defined IP address 192.168.39.237 and MAC address 52:54:00:33:9c:3e in network mk-addons-446299
I0920 18:13:37.699128 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHPort
I0920 18:13:37.699312 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHKeyPath
I0920 18:13:37.699441 749135 main.go:141] libmachine: (addons-446299) Calling .GetSSHUsername
I0920 18:13:37.699604 749135 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-739831/.minikube/machines/addons-446299/id_rsa Username:docker}
I0920 18:13:38.819493 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.922986564s)
I0920 18:13:38.819541 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.407583803s)
I0920 18:13:38.819575 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.819591 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.819607 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.819648 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.220925429s)
I0920 18:13:38.819598 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.819686 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.819705 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.819778 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.994650356s)
W0920 18:13:38.819815 749135 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 18:13:38.819840 749135 retry.go:31] will retry after 365.705658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 18:13:38.819845 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.947942371s)
I0920 18:13:38.819873 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.819885 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.819961 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.726965652s)
I0920 18:13:38.820001 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.820012 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.820227 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.820244 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.820285 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.820295 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.820413 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:38.820433 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:38.820460 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.820467 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.820475 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.820481 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.820629 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.820639 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.820647 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.820655 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.820718 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:38.820773 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.820781 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.820789 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.820795 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.821299 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:38.821316 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:38.821349 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.821355 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.821365 749135 addons.go:475] Verifying addon registry=true in "addons-446299"
I0920 18:13:38.821906 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.821917 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.821926 749135 addons.go:475] Verifying addon ingress=true in "addons-446299"
I0920 18:13:38.821997 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.822026 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.822038 749135 addons.go:475] Verifying addon metrics-server=true in "addons-446299"
I0920 18:13:38.822070 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.822084 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.822092 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:38.822100 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:38.822128 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.822143 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.822495 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:38.822542 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:38.822551 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:38.823406 749135 out.go:177] * Verifying ingress addon...
I0920 18:13:38.823868 749135 out.go:177] * Verifying registry addon...
I0920 18:13:38.824871 749135 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-446299 service yakd-dashboard -n yakd-dashboard
I0920 18:13:38.825597 749135 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0920 18:13:38.826680 749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0920 18:13:38.844205 749135 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0920 18:13:38.844236 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:38.850356 749135 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0920 18:13:38.850383 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:39.186375 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 18:13:39.200878 749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:39.330411 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:39.330769 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:39.849376 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:39.851690 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:40.361850 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:40.362230 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:41.034778 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:41.035000 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:41.038162 749135 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.342687523s)
I0920 18:13:41.038403 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.280132041s)
I0920 18:13:41.038461 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:41.038481 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:41.038819 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:41.038884 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:41.038905 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:41.038922 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:41.039163 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:41.039205 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:41.039225 749135 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-446299"
I0920 18:13:41.039205 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:41.041287 749135 out.go:177] * Verifying csi-hostpath-driver addon...
I0920 18:13:41.041290 749135 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 18:13:41.043438 749135 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0920 18:13:41.044297 749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 18:13:41.044713 749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0920 18:13:41.044732 749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0920 18:13:41.101841 749135 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 18:13:41.101863 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:41.130328 749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0920 18:13:41.130361 749135 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0920 18:13:41.246926 749135 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 18:13:41.246950 749135 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0920 18:13:41.330722 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:41.331217 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:41.367190 749135 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 18:13:41.375612 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.189187999s)
I0920 18:13:41.375679 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:41.375703 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:41.376082 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:41.376123 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:41.376131 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:41.376140 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:41.376180 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:41.376437 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:41.376461 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:41.376464 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:41.548363 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:41.701651 749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:41.831758 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:41.831933 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:42.053967 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:42.331450 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:42.331860 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:42.559368 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:42.796101 749135 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.428861154s)
I0920 18:13:42.796164 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:42.796186 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:42.796539 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:42.796652 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:42.796628 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:42.796665 749135 main.go:141] libmachine: Making call to close driver server
I0920 18:13:42.796674 749135 main.go:141] libmachine: (addons-446299) Calling .Close
I0920 18:13:42.796931 749135 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:13:42.796948 749135 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:13:42.796971 749135 main.go:141] libmachine: (addons-446299) DBG | Closing plugin on server side
I0920 18:13:42.798018 749135 addons.go:475] Verifying addon gcp-auth=true in "addons-446299"
I0920 18:13:42.799750 749135 out.go:177] * Verifying gcp-auth addon...
I0920 18:13:42.801961 749135 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0920 18:13:42.813536 749135 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 18:13:42.813557 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:42.834100 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:42.834512 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:43.050004 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:43.305311 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:43.330407 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:43.331586 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:43.549945 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:43.702111 749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:43.806287 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:43.830332 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:43.830560 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:44.050313 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:44.307181 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:44.332062 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:44.332579 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:44.549621 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:44.806074 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:44.830087 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:44.830821 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:45.049798 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:45.305355 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:45.329798 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:45.330472 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:45.549159 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:45.702368 749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:45.805600 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:45.830331 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:45.831003 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:46.048681 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:46.476235 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:46.476881 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:46.477765 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:46.576766 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:46.805777 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:46.830583 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:46.831463 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:47.050496 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:47.307091 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:47.330512 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:47.331048 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:47.549305 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:47.805735 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:47.830215 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:47.831512 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:48.049902 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:48.202178 749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:48.306243 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:48.329718 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:48.332280 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:48.550170 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:48.805429 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:48.829830 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:48.831490 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:49.050407 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:49.305950 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:49.331188 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:49.331284 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:49.549193 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:49.805377 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:49.831064 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:49.831335 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:50.050205 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:50.205469 749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:50.306610 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:50.330226 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:50.331728 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:50.548853 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:50.806045 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:50.830924 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:50.831062 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:51.049036 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:51.305994 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:51.330295 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:51.330905 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:51.549433 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:51.805870 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:51.830479 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:51.831665 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:52.050500 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:52.305644 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:52.330460 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:52.330909 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:52.549056 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:52.700600 749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:52.805458 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:52.829967 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:52.831274 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:53.049224 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:53.306145 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:53.330699 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:53.331032 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:53.548388 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:54.211235 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:54.211371 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:54.211581 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:54.212019 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:54.305931 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:54.332757 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:54.333316 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:54.550241 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:54.701439 749135 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"False"
I0920 18:13:54.805276 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:54.830616 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:54.831417 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:55.057083 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:55.305836 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:55.330687 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:55.331243 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:55.550673 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:55.701690 749135 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace has status "Ready":"True"
I0920 18:13:55.701725 749135 pod_ready.go:82] duration metric: took 18.50651845s for pod "nvidia-device-plugin-daemonset-6l2l2" in "kube-system" namespace to be "Ready" ...
I0920 18:13:55.701734 749135 pod_ready.go:39] duration metric: took 22.723049339s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 18:13:55.701754 749135 api_server.go:52] waiting for apiserver process to appear ...
I0920 18:13:55.701817 749135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 18:13:55.736899 749135 api_server.go:72] duration metric: took 25.619420852s to wait for apiserver process to appear ...
I0920 18:13:55.736929 749135 api_server.go:88] waiting for apiserver healthz status ...
I0920 18:13:55.736952 749135 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
I0920 18:13:55.741901 749135 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
ok
I0920 18:13:55.743609 749135 api_server.go:141] control plane version: v1.31.1
I0920 18:13:55.743635 749135 api_server.go:131] duration metric: took 6.69997ms to wait for apiserver health ...
I0920 18:13:55.743646 749135 system_pods.go:43] waiting for kube-system pods to appear ...
I0920 18:13:55.757231 749135 system_pods.go:59] 17 kube-system pods found
I0920 18:13:55.757585 749135 system_pods.go:61] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
I0920 18:13:55.757615 749135 system_pods.go:61] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 18:13:55.757633 749135 system_pods.go:61] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 18:13:55.757647 749135 system_pods.go:61] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 18:13:55.757654 749135 system_pods.go:61] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
I0920 18:13:55.757662 749135 system_pods.go:61] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
I0920 18:13:55.757668 749135 system_pods.go:61] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
I0920 18:13:55.757677 749135 system_pods.go:61] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
I0920 18:13:55.757682 749135 system_pods.go:61] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
I0920 18:13:55.757689 749135 system_pods.go:61] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
I0920 18:13:55.757697 749135 system_pods.go:61] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 18:13:55.757705 749135 system_pods.go:61] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
I0920 18:13:55.757714 749135 system_pods.go:61] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0920 18:13:55.757725 749135 system_pods.go:61] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 18:13:55.757738 749135 system_pods.go:61] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 18:13:55.757750 749135 system_pods.go:61] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 18:13:55.757759 749135 system_pods.go:61] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
I0920 18:13:55.757770 749135 system_pods.go:74] duration metric: took 14.117036ms to wait for pod list to return data ...
I0920 18:13:55.757782 749135 default_sa.go:34] waiting for default service account to be created ...
I0920 18:13:55.762579 749135 default_sa.go:45] found service account: "default"
I0920 18:13:55.762610 749135 default_sa.go:55] duration metric: took 4.817698ms for default service account to be created ...
I0920 18:13:55.762622 749135 system_pods.go:116] waiting for k8s-apps to be running ...
I0920 18:13:55.772780 749135 system_pods.go:86] 17 kube-system pods found
I0920 18:13:55.772808 749135 system_pods.go:89] "coredns-7c65d6cfc9-8b5fx" [226fc466-f0b5-4501-8879-b8b9b8d758ac] Running
I0920 18:13:55.772816 749135 system_pods.go:89] "csi-hostpath-attacher-0" [b131974d-0f4b-4bc6-bec3-d4c797279aa4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 18:13:55.772822 749135 system_pods.go:89] "csi-hostpath-resizer-0" [684355d7-d68e-4357-8103-d8350a38ea37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 18:13:55.772830 749135 system_pods.go:89] "csi-hostpathplugin-fcmx5" [1576357c-2e2c-469a-b069-dcac225f49c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 18:13:55.772834 749135 system_pods.go:89] "etcd-addons-446299" [c82607ca-b677-4592-935a-a32dad76e79c] Running
I0920 18:13:55.772839 749135 system_pods.go:89] "kube-apiserver-addons-446299" [93375989-de9f-4fea-afcc-44d35775ddd6] Running
I0920 18:13:55.772842 749135 system_pods.go:89] "kube-controller-manager-addons-446299" [4c06855c-f18c-4df4-bd04-584c8594a744] Running
I0920 18:13:55.772847 749135 system_pods.go:89] "kube-ingress-dns-minikube" [631849c1-f984-4e83-b07b-6b2ed4eb0697] Running
I0920 18:13:55.772851 749135 system_pods.go:89] "kube-proxy-9pcgb" [934faade-c115-4ced-9bb6-c22a2fe014f2] Running
I0920 18:13:55.772856 749135 system_pods.go:89] "kube-scheduler-addons-446299" [ce4ce9a3-dd64-47ed-a920-b6c5359c80a7] Running
I0920 18:13:55.772865 749135 system_pods.go:89] "metrics-server-84c5f94fbc-dgfgh" [84513540-b090-4d24-b6e0-9ed764434018] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 18:13:55.772922 749135 system_pods.go:89] "nvidia-device-plugin-daemonset-6l2l2" [c6db8268-e330-413b-9107-88c63f861e42] Running
I0920 18:13:55.772931 749135 system_pods.go:89] "registry-66c9cd494c-vxc6t" [10b4cecb-c85b-45ef-8043-e88a81971d51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0920 18:13:55.772936 749135 system_pods.go:89] "registry-proxy-bqdmf" [11ab987d-a80f-412a-8a15-03a5898a2e9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 18:13:55.772946 749135 system_pods.go:89] "snapshot-controller-56fcc65765-4qwlb" [d4cd83fc-a074-4317-9b02-22010ae0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 18:13:55.772953 749135 system_pods.go:89] "snapshot-controller-56fcc65765-8rk95" [63d1f200-a587-488c-82d3-bf38586a6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 18:13:55.772957 749135 system_pods.go:89] "storage-provisioner" [0e9e378d-208e-46e0-a2be-70f96e59408a] Running
I0920 18:13:55.772963 749135 system_pods.go:126] duration metric: took 10.336403ms to wait for k8s-apps to be running ...
I0920 18:13:55.772972 749135 system_svc.go:44] waiting for kubelet service to be running ....
I0920 18:13:55.773018 749135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0920 18:13:55.793348 749135 system_svc.go:56] duration metric: took 20.361414ms WaitForService to wait for kubelet
I0920 18:13:55.793389 749135 kubeadm.go:582] duration metric: took 25.675912921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 18:13:55.793417 749135 node_conditions.go:102] verifying NodePressure condition ...
I0920 18:13:55.802544 749135 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0920 18:13:55.802600 749135 node_conditions.go:123] node cpu capacity is 2
I0920 18:13:55.802617 749135 node_conditions.go:105] duration metric: took 9.193115ms to run NodePressure ...
I0920 18:13:55.802639 749135 start.go:241] waiting for startup goroutines ...
I0920 18:13:55.807268 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:55.834016 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:55.834628 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:56.049150 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:56.305873 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:56.331424 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:56.331798 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:56.550328 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:56.806065 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:56.829659 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:56.830161 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:57.049081 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:57.306075 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:57.329355 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:57.330540 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:57.549591 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:57.805900 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:57.830374 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:57.832330 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:58.049092 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:58.306271 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:58.329770 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:58.331160 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:58.922331 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:58.923063 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:58.923163 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:58.924173 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:59.050995 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:59.306609 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:59.410277 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:13:59.410618 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:59.549349 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:13:59.806119 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:13:59.829906 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:13:59.830124 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:00.049161 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:00.306487 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:00.330117 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:00.331103 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:00.549561 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:00.806760 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:00.831148 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:00.831297 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:01.050001 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:01.306298 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:01.407860 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:01.408083 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:01.548728 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:01.806320 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:01.830021 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:01.830689 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:02.048991 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:02.305521 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:02.330400 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:02.331175 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:02.549048 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:02.805598 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:02.830127 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:02.830327 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:03.049629 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:03.305858 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:03.331322 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:03.331679 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:03.548558 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:03.820166 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:03.830589 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:03.832021 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:04.465452 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:04.465905 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:04.465965 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:04.466066 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:04.565162 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:04.805221 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:04.830427 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:04.830573 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:05.050021 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:05.305449 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:05.330307 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:05.331288 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:05.549216 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:05.805952 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:05.830822 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:05.830882 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:06.048888 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:06.305947 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:06.330556 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:06.330915 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:06.549018 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:06.806964 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:06.841818 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:06.843261 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:07.048576 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:07.305982 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:07.330357 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:07.330437 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:07.549676 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:07.813909 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:07.830340 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:07.830795 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:08.050020 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:08.306364 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:08.330678 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:08.332935 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:08.548619 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:08.805004 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:08.830441 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:08.831560 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:09.332291 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:09.333139 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:09.333782 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:09.335034 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:09.549087 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:09.805906 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:09.829949 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:09.830348 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:10.049303 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:10.306098 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:10.329817 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:10.330883 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:10.549227 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:10.951479 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:10.951670 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:10.951904 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:11.048505 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:11.306899 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:11.330827 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:11.331176 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:11.549848 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:11.805719 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:11.830262 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:11.830606 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:12.059649 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:12.305971 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:12.329961 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:12.330563 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:12.549966 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:12.804939 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:12.829214 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:12.830837 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:13.048395 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:13.305641 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:13.331438 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:13.331605 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:13.549421 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:13.805919 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:13.831661 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:13.831730 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:14.049399 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:14.306300 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:14.329818 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:14.330774 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:14.552222 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:14.806365 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:14.829698 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:14.831887 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:15.048953 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:15.305618 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:15.330650 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:15.330943 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:15.548777 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:15.806132 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:15.830944 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 18:14:15.831352 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:16.052172 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:16.306342 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:16.329653 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:16.330883 749135 kapi.go:107] duration metric: took 37.504199599s to wait for kubernetes.io/minikube-addons=registry ...
I0920 18:14:16.548598 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:16.805754 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:16.830184 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:17.049843 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:17.383048 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:17.383735 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:17.550278 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:17.806058 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:17.829341 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:18.051596 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:18.306388 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:18.334664 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:18.552534 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:18.806897 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:18.830308 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:19.050045 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:19.306131 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:19.329862 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:19.550696 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:19.807045 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:19.829977 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:20.048666 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:20.306256 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:20.329911 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:20.550226 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:20.806144 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:20.830855 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:21.049583 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:21.310640 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:21.412808 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:21.549653 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:21.805953 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:21.829404 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:22.049850 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:22.315829 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:22.331862 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:22.549120 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:22.806085 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:22.829986 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:23.049654 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:23.306266 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:23.330058 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:23.560251 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:23.807013 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:23.830715 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:24.049404 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:24.306201 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:24.330512 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:24.595031 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:24.806293 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:24.907159 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:25.048965 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:25.305513 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:25.331059 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:25.549920 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:25.805287 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:25.830246 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:26.048992 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:26.306656 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:26.329987 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:26.549698 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:26.808992 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:26.829741 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:27.052649 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:27.312773 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:27.331951 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:27.562526 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:27.805604 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:27.830050 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:28.067172 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:28.306333 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:28.330924 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:28.550567 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:28.807713 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:28.836265 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:29.049440 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:29.305994 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:29.329628 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:29.551265 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:29.807081 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:29.829169 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:30.051607 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:30.308200 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:30.331298 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:30.553108 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:30.822844 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:30.831353 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:31.049853 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:31.305139 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:31.329419 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:31.549350 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:31.806142 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:31.829483 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:32.053013 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:32.306129 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:32.330537 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:32.771680 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:32.806908 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:32.831303 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:33.050163 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:33.305068 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:33.330437 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:33.548440 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:33.806177 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:33.830995 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:34.049496 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 18:14:34.310365 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:34.329994 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:34.548907 749135 kapi.go:107] duration metric: took 53.50460724s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0920 18:14:34.805871 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:34.830222 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:35.306762 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:35.330726 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:35.806453 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:35.830187 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:36.305548 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:36.330510 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:36.806443 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:36.829844 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:37.306287 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:37.330018 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:37.806187 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:37.829944 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:38.306428 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:38.330700 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:38.806275 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:38.830764 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:39.305577 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:39.330471 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:39.806014 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:39.829683 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:40.306572 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:40.329962 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:40.806663 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:40.830402 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:41.305985 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:41.329856 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:41.807066 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:41.829842 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:42.305779 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:42.330575 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:42.805256 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:42.829665 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:43.305345 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:43.329924 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:43.805970 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:43.829619 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:44.305067 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:44.330110 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:44.807165 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:44.832428 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:45.307073 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:45.329430 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:45.807239 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:45.829759 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:46.305795 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:46.330660 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:46.807307 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:46.829950 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:47.306710 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:47.330054 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:47.806495 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:47.830576 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:48.305615 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:48.330601 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:48.805326 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:48.829994 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:49.306221 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:49.330067 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:49.807517 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:49.831847 749135 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0920 18:14:50.312486 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:50.412022 749135 kapi.go:107] duration metric: took 1m11.586419635s to wait for app.kubernetes.io/name=ingress-nginx ...
I0920 18:14:50.805525 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:51.306784 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:51.919819 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:52.306451 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:52.809242 749135 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 18:14:53.318752 749135 kapi.go:107] duration metric: took 1m10.516788064s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0920 18:14:53.320395 749135 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-446299 cluster.
I0920 18:14:53.321854 749135 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0920 18:14:53.323252 749135 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
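The `gcp-auth-skip-secret` opt-out mentioned in the log above is set in the pod's own configuration, since the gcp-auth webhook decides at admission time. A minimal sketch of such a pod manifest (the pod name and image here are hypothetical, not from this log):

```yaml
# Hypothetical pod that the gcp-auth admission webhook will skip:
# the gcp-auth-skip-secret label prevents GCP credentials from being mounted.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds          # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: gcr.io/k8s-minikube/busybox   # any image works; this one appears elsewhere in this run
    command: ["sleep", "3600"]
```

Applied with `kubectl --context addons-446299 apply -f pod.yaml`, this pod would be created without the mounted credentials; pods without the label continue to receive them, as the log states.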
I0920 18:14:53.324985 749135 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0920 18:14:53.326283 749135 addons.go:510] duration metric: took 1m23.208765269s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0920 18:14:53.326342 749135 start.go:246] waiting for cluster config update ...
I0920 18:14:53.326365 749135 start.go:255] writing updated cluster config ...
I0920 18:14:53.326710 749135 ssh_runner.go:195] Run: rm -f paused
I0920 18:14:53.387365 749135 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0920 18:14:53.389186 749135 out.go:177] * Done! kubectl is now configured to use "addons-446299" cluster and "default" namespace by default
==> CRI-O <==
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.161037157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856652161011535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c366805b-8fc5-449b-8a34-0eb221c7c5a9 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.161492548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aec5cff0-8ac5-4c3a-8da4-904baed10a0c name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.161789555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aec5cff0-8ac5-4c3a-8da4-904baed10a0c name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.162350790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de,PodSandboxId:34301f7252ea6eae961095d9413f9fdd3ef14ea8253d18e0da80e4ed2b715059,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726856055896888672,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bqdmf,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ab987d-a80f-412a-8a15-03a5898a2e9e,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,
CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3,PodSandboxId:63f0d2722ba276dd3b36e061448a39004477c837ce53a11da2279149998eaf3a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb37596
16a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726856041231263633,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-vxc6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b4cecb-c85b-45ef-8043-e88a81971d51,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{
Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a
7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandbox
Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf2182
5fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f21
68ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aec5cff0-8ac5-4c3a-8da4-904baed10a0c name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.205837527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=907a9f02-f0df-4dd8-ae4e-3239499b76bf name=/runtime.v1.RuntimeService/Version
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.205931540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=907a9f02-f0df-4dd8-ae4e-3239499b76bf name=/runtime.v1.RuntimeService/Version
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.207904691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce22261b-3160-4de8-9c74-bfebf86d864d name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.209248139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856652209214846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce22261b-3160-4de8-9c74-bfebf86d864d name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.209973165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb47e070-a12f-4c44-a30d-58c927d97261 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.210046076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb47e070-a12f-4c44-a30d-58c927d97261 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.210566655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de,PodSandboxId:34301f7252ea6eae961095d9413f9fdd3ef14ea8253d18e0da80e4ed2b715059,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726856055896888672,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bqdmf,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ab987d-a80f-412a-8a15-03a5898a2e9e,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,
CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3,PodSandboxId:63f0d2722ba276dd3b36e061448a39004477c837ce53a11da2279149998eaf3a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb37596
16a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726856041231263633,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-vxc6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b4cecb-c85b-45ef-8043-e88a81971d51,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{
Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a
7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandbox
Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf2182
5fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f21
68ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb47e070-a12f-4c44-a30d-58c927d97261 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.251300631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb1108cc-2fe6-4e14-933c-f89daa517176 name=/runtime.v1.RuntimeService/Version
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.251393675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb1108cc-2fe6-4e14-933c-f89daa517176 name=/runtime.v1.RuntimeService/Version
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.253095856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8001f8ff-df77-48fc-9a11-02a6c84ace44 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.254136859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856652254108980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8001f8ff-df77-48fc-9a11-02a6c84ace44 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.254673857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6faea495-80e8-41f4-89e2-b8c4aa56f8de name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.254791771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6faea495-80e8-41f4-89e2-b8c4aa56f8de name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.255291250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de,PodSandboxId:34301f7252ea6eae961095d9413f9fdd3ef14ea8253d18e0da80e4ed2b715059,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726856055896888672,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bqdmf,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ab987d-a80f-412a-8a15-03a5898a2e9e,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,
CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3,PodSandboxId:63f0d2722ba276dd3b36e061448a39004477c837ce53a11da2279149998eaf3a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb37596
16a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726856041231263633,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-vxc6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b4cecb-c85b-45ef-8043-e88a81971d51,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{
Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a
7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandbox
Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf2182
5fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f21
68ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6faea495-80e8-41f4-89e2-b8c4aa56f8de name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.294206005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20f698b6-288e-4e17-b668-2eeceff0183f name=/runtime.v1.RuntimeService/Version
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.294282207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20f698b6-288e-4e17-b668-2eeceff0183f name=/runtime.v1.RuntimeService/Version
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.295357162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc09133c-7298-4721-be20-21ba4944e6c9 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.297137865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856652297112557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc09133c-7298-4721-be20-21ba4944e6c9 name=/runtime.v1.ImageService/ImageFsInfo
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.297977866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16eee780-ea42-4060-88b6-35af41558f76 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.298039927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16eee780-ea42-4060-88b6-35af41558f76 name=/runtime.v1.RuntimeService/ListContainers
Sep 20 18:24:12 addons-446299 crio[659]: time="2024-09-20 18:24:12.299047350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228,PodSandboxId:efe0ec0dcbcc2ed97a1516bf84bf6944f46cc3c709619429a3f8a6ed7ec20db4,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726856092713670363,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-9scf7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e1fe9053-9c74-44c1-b9eb-33e656a4810b,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"con
tainerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba7dc5faa58b70f8ae294e26f758d07d8a41941a4b50201e68cc018c51a0c741,PodSandboxId:75840320e52800f1f44b2e6c517cc9307855642595e4a7055201d0ba2d030659,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726856089744039479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-8kt58,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 91004bb0-5831-431e-8777-5e
8e4b5296bc,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b094e7c30c796bf0bee43b60b80d46621df4bbd767dc91c732eb3b7bfa0bb00c,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1726856074238826249,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed98529d363a04b2955c02104f56e8a3cd80d69b45b2e1944ff3b0b7c189288,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef0019
58d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1726856072837441671,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69da68d150b2a5583b7305709c1c4bbf0f0a8590d238d599504b11d9ad7b529e,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc4
16abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1726856070768208336,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9ca7a3ca987a47ab5b416daf04522a3b27c6339db4003eb231d16ece603a60,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256
:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1726856069831000814,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2b6759c0bf97ff3d4de314ce5ca4e5311a8546b342d1ec787ca3a1624f8908,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metad
ata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1726856068009772282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66723f0443fe259bbab9521031456f7833339138ca42ab655fadf6bafc2136c5,PodSandboxId:00b4d98c2977
96e0eb1b921793bddbf0c466ffdc076d60dd27517a349c2d3749,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1726856066130067570,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 684355d7-d68e-4357-8103-d8350a38ea37,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c917700eb77472b431699f7e3b8ffa5e99fb0c6e7b94da0e7dc3e5d789ff7866,Pod
SandboxId:3ffd6a03ee49011ec8d222722b52204537020ec67831669422b18f2722d276e2,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1726856064693574171,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131974d-0f4b-4bc6-bec3-d4c797279aa4,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:509b6bbf231a9f6acf9ed9b5a160d57af8fe6ce822
d14a360f1c69aead3f9d36,PodSandboxId:eccc7c4b1b4ceb976b58527d35bc07ccd05bd16d28b808c1ddbf66aa21d69fe4,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1726856062559192499,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fcmx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1576357c-2e2c-469a-b069-dcac225f49c4,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86a2c89e146b1f6fc31a26a2e49b335f8ae30c35e76d7136b68425260628fef,PodSandboxId:a24f9a7c284879488d62c5c3a7402fbdc7b2ff55b494a70888c8b4b46593c754,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061202431069,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2mwr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: afcf3275-77b0-49cd-b425-e1c3fe89fe90,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:bf44e059a196a437fcc79e35dc09edc08e7e7fa8799df9f5556af7ec52f8bbcc,PodSandboxId:1938162f1608400bc041a5b0473880759f6d77d6783afec076342b08458fb334,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726856061156977853,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sdwls,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8334b2c4-8b09-408c-8652-46103ce6f6c6,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f5bce9e468f1d83d07951514190608f5cb1a2826158632ec7e66e3d069b730,PodSandboxId:46ab05da30745fa494969aa465b9ae41146fb457dd17388f6f0fbfa7637de4b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059566643922,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-4qwlb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4cd83fc-a074-4317-9b02-22010ae0ca66,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf93216045927e57562a5ef14225eebdfc0b71d50b89062312728787ee2e82f,PodSandboxId:f64e4538489ab0114de17e1f8f0c98d3d95618162fa5d2ed9b3853eb59a75d77,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726856059450265287,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-8rk95,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d1f200-a587-488c-82d3-bf38586a6fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3b736165a009635770dffd427114f8d374e28f83f090924a030c124eb4b844,PodSandboxId:dd8942402304fc3849ddaac3cd53c37f8af44d3a68106d3633546f78cb29c992,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726856057582231326,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-dgfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84513540-b090-4d24-b6e0-9ed764434018,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de,PodSandboxId:34301f7252ea6eae961095d9413f9fdd3ef14ea8253d18e0da80e4ed2b715059,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726856055896888672,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bqdmf,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ab987d-a80f-412a-8a15-03a5898a2e9e,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b425ff4f976afe3cb61d35934638e72a10e0094f7b61f40352a2fee42636302f,PodSandboxId:a0bef6fd3ee4b307210dd0ac0e2746329872520eb77ba21f03f92566351704f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,
CreatedAt:1726856046927873598,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-tvbgx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: b4d58283-346f-437d-adfb-34215341023e,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3,PodSandboxId:63f0d2722ba276dd3b36e061448a39004477c837ce53a11da2279149998eaf3a,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:75ef5b734af47dc41ff2fb442f287ee08c7da31dddb37596
16a8f693f0f346a0,State:CONTAINER_EXITED,CreatedAt:1726856041231263633,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-66c9cd494c-vxc6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b4cecb-c85b-45ef-8043-e88a81971d51,},Annotations:map[string]string{io.kubernetes.container.hash: 49fa49ac,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68195d8abd2e36c4e6f93a25bf60ca76fde83bf77a850a92b5213e7653c8414e,PodSandboxId:50aa8158427c9580c2a5ec7846daa046ebdb66adcc3769f3b811e9bfd73dee74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726856026660615460,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 631849c1-f984-4e83-b07b-6b2ed4eb0697,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0,PodSandboxId:2de8a3616c78216796d1a30e49390fa1880efae5c01dc6d060c3a9fc52733244,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{
Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726856016407131102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9e378d-208e-46e0-a2be-70f96e59408a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a,PodSandboxId:a7fdf4add17f82634ceda8e2a8ce96fc2312b21d1e4bcabce0730c45dba99a5b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc4
8af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726856014256879968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8b5fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 226fc466-f0b5-4501-8879-b8b9b8d758ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a
7cb0255780a4516c7256a6,PodSandboxId:5aa37b64d2a9c61038f28fea479857487cf0c835df5704953ae6496a18553faf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726856011173606981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9pcgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934faade-c115-4ced-9bb6-c22a2fe014f2,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072,PodSandbox
Id:4306bc0f35baa7738aceb1c5a0dfcf9c43a7541ffb8e1e463f1d2bfb3b4ddf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726856000251287780,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f419eac436c5a6f133bb67c6a198274,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c,PodSandboxId:403b403cdf2182
5fc57049326772376016cc8b60292a2666bdde28fa4d9d97d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726856000260280505,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da0809c41e3f89be51ba1d85d92334c0,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551,PodSandboxId:17de22cbd91b4d025017f1149b32f21
68ea0cac728b75d80f78ab208ff3de7aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726856000233156133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86ddc6bc2cc035d3de8f8c47a04894ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e,PodSandboxId:859cc747f1c82c2cfec8fa47af83f84bb172224df65a7adc26b7cd23a8e2bb3d,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726856000241829850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-446299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c1dc236d6aa092754be85db9af15d9,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16eee780-ea42-4060-88b6-35af41558f76 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
7c4b9c3a7c539 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b 9 minutes ago Running gcp-auth 0 efe0ec0dcbcc2 gcp-auth-89d5ffd79-9scf7
ba7dc5faa58b7 registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6 9 minutes ago Running controller 0 75840320e5280 ingress-nginx-controller-bc57996ff-8kt58
b094e7c30c796 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 9 minutes ago Running csi-snapshotter 0 eccc7c4b1b4ce csi-hostpathplugin-fcmx5
bed98529d363a registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 9 minutes ago Running csi-provisioner 0 eccc7c4b1b4ce csi-hostpathplugin-fcmx5
69da68d150b2a registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 9 minutes ago Running liveness-probe 0 eccc7c4b1b4ce csi-hostpathplugin-fcmx5
fd9ca7a3ca987 registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11 9 minutes ago Running hostpath 0 eccc7c4b1b4ce csi-hostpathplugin-fcmx5
5a2b6759c0bf9 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc 9 minutes ago Running node-driver-registrar 0 eccc7c4b1b4ce csi-hostpathplugin-fcmx5
66723f0443fe2 registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 9 minutes ago Running csi-resizer 0 00b4d98c29779 csi-hostpath-resizer-0
c917700eb7747 registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0 9 minutes ago Running csi-attacher 0 3ffd6a03ee490 csi-hostpath-attacher-0
509b6bbf231a9 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 9 minutes ago Running csi-external-health-monitor-controller 0 eccc7c4b1b4ce csi-hostpathplugin-fcmx5
e86a2c89e146b ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242 9 minutes ago Exited patch 1 a24f9a7c28487 ingress-nginx-admission-patch-2mwr8
bf44e059a196a registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012 9 minutes ago Exited create 0 1938162f16084 ingress-nginx-admission-create-sdwls
33f5bce9e468f registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 9 minutes ago Running volume-snapshot-controller 0 46ab05da30745 snapshot-controller-56fcc65765-4qwlb
cbf9321604592 registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 9 minutes ago Running volume-snapshot-controller 0 f64e4538489ab snapshot-controller-56fcc65765-8rk95
3c3b736165a00 registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a 9 minutes ago Running metrics-server 0 dd8942402304f metrics-server-84c5f94fbc-dgfgh
b425ff4f976af docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 10 minutes ago Running local-path-provisioner 0 a0bef6fd3ee4b local-path-provisioner-86d989889c-tvbgx
68195d8abd2e3 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 10 minutes ago Running minikube-ingress-dns 0 50aa8158427c9 kube-ingress-dns-minikube
123e17c57dc2a 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 10 minutes ago Running storage-provisioner 0 2de8a3616c782 storage-provisioner
d52dc29cba22a c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 10 minutes ago Running coredns 0 a7fdf4add17f8 coredns-7c65d6cfc9-8b5fx
371fb9f89e965 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561 10 minutes ago Running kube-proxy 0 5aa37b64d2a9c kube-proxy-9pcgb
730952f4127d6 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee 10 minutes ago Running kube-apiserver 0 403b403cdf218 kube-apiserver-addons-446299
e9e7734f58847 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b 10 minutes ago Running kube-scheduler 0 4306bc0f35baa kube-scheduler-addons-446299
a8af18aadd9a1 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1 10 minutes ago Running kube-controller-manager 0 859cc747f1c82 kube-controller-manager-addons-446299
402ab000bdb93 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4 10 minutes ago Running etcd 0 17de22cbd91b4 etcd-addons-446299
==> coredns [d52dc29cba22a178059e3f5273c57de1362df61bcd21abc9ad9c5058087ed31a] <==
[INFO] 127.0.0.1:45092 - 31226 "HINFO IN 8537533385009167611.1098357581305743543. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017946303s
[INFO] 10.244.0.7:50895 - 60070 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000864499s
[INFO] 10.244.0.7:50895 - 30883 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.004754851s
[INFO] 10.244.0.7:60479 - 45291 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000276551s
[INFO] 10.244.0.7:60479 - 60648 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000259587s
[INFO] 10.244.0.7:34337 - 50221 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103649s
[INFO] 10.244.0.7:34337 - 3119 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000190818s
[INFO] 10.244.0.7:50579 - 48699 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149541s
[INFO] 10.244.0.7:50579 - 13882 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00029954s
[INFO] 10.244.0.7:52674 - 19194 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100903s
[INFO] 10.244.0.7:52674 - 48616 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131897s
[INFO] 10.244.0.7:34842 - 24908 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052174s
[INFO] 10.244.0.7:34842 - 17742 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131345s
[INFO] 10.244.0.7:58542 - 36156 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047177s
[INFO] 10.244.0.7:58542 - 62014 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148973s
[INFO] 10.244.0.7:34082 - 14251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145316s
[INFO] 10.244.0.7:34082 - 45485 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000238133s
[INFO] 10.244.0.21:56997 - 31030 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000537673s
[INFO] 10.244.0.21:35720 - 34441 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147988s
[INFO] 10.244.0.21:53795 - 23425 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001554s
[INFO] 10.244.0.21:58869 - 385 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122258s
[INFO] 10.244.0.21:37326 - 35127 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00023415s
[INFO] 10.244.0.21:35448 - 47752 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126595s
[INFO] 10.244.0.21:41454 - 25870 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003639103s
[INFO] 10.244.0.21:51708 - 51164 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00402176s
==> describe nodes <==
Name: addons-446299
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-446299
kubernetes.io/os=linux
minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
minikube.k8s.io/name=addons-446299
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_20T18_13_25_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-446299
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-446299"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 20 Sep 2024 18:13:22 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-446299
AcquireTime: <unset>
RenewTime: Fri, 20 Sep 2024 18:24:06 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 20 Sep 2024 18:23:27 +0000 Fri, 20 Sep 2024 18:13:21 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 20 Sep 2024 18:23:27 +0000 Fri, 20 Sep 2024 18:13:21 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 20 Sep 2024 18:23:27 +0000 Fri, 20 Sep 2024 18:13:21 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 20 Sep 2024 18:23:27 +0000 Fri, 20 Sep 2024 18:13:26 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.237
Hostname: addons-446299
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 6b51819720d24a4988f4faf5cbed4e8f
System UUID: 6b518197-20d2-4a49-88f4-faf5cbed4e8f
Boot ID: 431228fc-f5a8-4282-bf7e-10c36798659f
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (21 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m17s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43s
default registry-test 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62s
default task-pv-pod-restore 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35s
gcp-auth gcp-auth-89d5ffd79-9scf7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
ingress-nginx ingress-nginx-controller-bc57996ff-8kt58 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-8b5fx 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 10m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system csi-hostpathplugin-fcmx5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system etcd-addons-446299 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 10m
kube-system kube-apiserver-addons-446299 250m (12%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-controller-manager-addons-446299 200m (10%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-proxy-9pcgb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-scheduler-addons-446299 100m (5%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system metrics-server-84c5f94fbc-dgfgh 100m (5%) 0 (0%) 200Mi (5%) 0 (0%) 10m
kube-system snapshot-controller-56fcc65765-4qwlb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system snapshot-controller-56fcc65765-8rk95 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
local-path-storage local-path-provisioner-86d989889c-tvbgx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (12%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 10m kube-proxy
Normal Starting 10m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 10m kubelet Node addons-446299 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 10m kubelet Node addons-446299 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 10m kubelet Node addons-446299 status is now: NodeHasSufficientPID
Normal NodeReady 10m kubelet Node addons-446299 status is now: NodeReady
Normal RegisteredNode 10m node-controller Node addons-446299 event: Registered Node addons-446299 in Controller
==> dmesg <==
[ +0.086501] kauditd_printk_skb: 69 callbacks suppressed
[ +5.305303] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
[ +0.141616] kauditd_printk_skb: 18 callbacks suppressed
[ +5.046436] kauditd_printk_skb: 135 callbacks suppressed
[ +5.120665] kauditd_printk_skb: 83 callbacks suppressed
[ +5.997269] kauditd_printk_skb: 97 callbacks suppressed
[ +8.458196] kauditd_printk_skb: 5 callbacks suppressed
[Sep20 18:14] kauditd_printk_skb: 2 callbacks suppressed
[ +5.706525] kauditd_printk_skb: 34 callbacks suppressed
[ +16.244583] kauditd_printk_skb: 28 callbacks suppressed
[ +5.135040] kauditd_printk_skb: 70 callbacks suppressed
[ +5.940354] kauditd_printk_skb: 30 callbacks suppressed
[ +13.767745] kauditd_printk_skb: 3 callbacks suppressed
[ +5.007018] kauditd_printk_skb: 48 callbacks suppressed
[Sep20 18:15] kauditd_printk_skb: 10 callbacks suppressed
[Sep20 18:16] kauditd_printk_skb: 30 callbacks suppressed
[Sep20 18:17] kauditd_printk_skb: 28 callbacks suppressed
[Sep20 18:20] kauditd_printk_skb: 28 callbacks suppressed
[Sep20 18:22] kauditd_printk_skb: 28 callbacks suppressed
[Sep20 18:23] kauditd_printk_skb: 6 callbacks suppressed
[ +5.877503] kauditd_printk_skb: 40 callbacks suppressed
[ +5.382620] kauditd_printk_skb: 41 callbacks suppressed
[ +8.681981] kauditd_printk_skb: 39 callbacks suppressed
[ +13.570039] kauditd_printk_skb: 14 callbacks suppressed
[Sep20 18:24] kauditd_printk_skb: 2 callbacks suppressed
==> etcd [402ab000bdb9360b9d14054aa336dc4312504e85cd5336ba788bcc24a74fb551] <==
{"level":"warn","ts":"2024-09-20T18:14:32.753190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.800719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T18:14:32.753227Z","caller":"traceutil/trace.go:171","msg":"trace[543841858] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1058; }","duration":"350.836502ms","start":"2024-09-20T18:14:32.402385Z","end":"2024-09-20T18:14:32.753221Z","steps":["trace[543841858] 'agreement among raft nodes before linearized reading' (duration: 350.779906ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T18:14:32.753246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:14:32.402356Z","time spent":"350.885838ms","remote":"127.0.0.1:36780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2024-09-20T18:14:32.753338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.730876ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T18:14:32.753372Z","caller":"traceutil/trace.go:171","msg":"trace[1542998802] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1058; }","duration":"340.769961ms","start":"2024-09-20T18:14:32.412597Z","end":"2024-09-20T18:14:32.753367Z","steps":["trace[1542998802] 'agreement among raft nodes before linearized reading' (duration: 340.724283ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T18:14:32.753846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.265355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T18:14:32.753903Z","caller":"traceutil/trace.go:171","msg":"trace[581069886] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1058; }","duration":"217.327931ms","start":"2024-09-20T18:14:32.536567Z","end":"2024-09-20T18:14:32.753895Z","steps":["trace[581069886] 'agreement among raft nodes before linearized reading' (duration: 217.246138ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T18:14:51.903628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.538818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2024-09-20T18:14:51.904065Z","caller":"traceutil/trace.go:171","msg":"trace[2043860769] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1117; }","duration":"144.082045ms","start":"2024-09-20T18:14:51.759954Z","end":"2024-09-20T18:14:51.904036Z","steps":["trace[2043860769] 'count revisions from in-memory index tree' (duration: 143.478073ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T18:14:51.904831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.923374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T18:14:51.904891Z","caller":"traceutil/trace.go:171","msg":"trace[386261722] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"111.005288ms","start":"2024-09-20T18:14:51.793876Z","end":"2024-09-20T18:14:51.904881Z","steps":["trace[386261722] 'range keys from in-memory index tree' (duration: 110.882796ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T18:23:04.403949Z","caller":"traceutil/trace.go:171","msg":"trace[1232773900] linearizableReadLoop","detail":"{readStateIndex:2064; appliedIndex:2063; }","duration":"137.955638ms","start":"2024-09-20T18:23:04.265959Z","end":"2024-09-20T18:23:04.403914Z","steps":["trace[1232773900] 'read index received' (duration: 137.83631ms)","trace[1232773900] 'applied index is now lower than readState.Index' (duration: 118.922µs)"],"step_count":2}
{"level":"warn","ts":"2024-09-20T18:23:04.404190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.160514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T18:23:04.404218Z","caller":"traceutil/trace.go:171","msg":"trace[1586547199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1925; }","duration":"138.254725ms","start":"2024-09-20T18:23:04.265955Z","end":"2024-09-20T18:23:04.404210Z","steps":["trace[1586547199] 'agreement among raft nodes before linearized reading' (duration: 138.105756ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T18:23:04.404422Z","caller":"traceutil/trace.go:171","msg":"trace[700372140] transaction","detail":"{read_only:false; response_revision:1925; number_of_response:1; }","duration":"379.764994ms","start":"2024-09-20T18:23:04.024645Z","end":"2024-09-20T18:23:04.404410Z","steps":["trace[700372140] 'process raft request' (duration: 379.19458ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T18:23:04.404517Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:04.024622Z","time spent":"379.814521ms","remote":"127.0.0.1:36928","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1924 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"info","ts":"2024-09-20T18:23:21.256394Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1506}
{"level":"info","ts":"2024-09-20T18:23:21.288238Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1506,"took":"31.314726ms","hash":517065302,"current-db-size-bytes":7016448,"current-db-size":"7.0 MB","current-db-size-in-use-bytes":4055040,"current-db-size-in-use":"4.1 MB"}
{"level":"info","ts":"2024-09-20T18:23:21.288299Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":517065302,"revision":1506,"compact-revision":-1}
{"level":"info","ts":"2024-09-20T18:23:22.430993Z","caller":"traceutil/trace.go:171","msg":"trace[200479020] transaction","detail":"{read_only:false; response_revision:2108; number_of_response:1; }","duration":"314.888557ms","start":"2024-09-20T18:23:22.116093Z","end":"2024-09-20T18:23:22.430981Z","steps":["trace[200479020] 'process raft request' (duration: 314.552392ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T18:23:22.431107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:23:22.116078Z","time spent":"314.951125ms","remote":"127.0.0.1:37058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2038 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
{"level":"info","ts":"2024-09-20T18:23:23.865254Z","caller":"traceutil/trace.go:171","msg":"trace[102178879] linearizableReadLoop","detail":"{readStateIndex:2258; appliedIndex:2257; }","duration":"203.488059ms","start":"2024-09-20T18:23:23.661753Z","end":"2024-09-20T18:23:23.865241Z","steps":["trace[102178879] 'read index received' (duration: 203.347953ms)","trace[102178879] 'applied index is now lower than readState.Index' (duration: 139.623µs)"],"step_count":2}
{"level":"warn","ts":"2024-09-20T18:23:23.865357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.585815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T18:23:23.865380Z","caller":"traceutil/trace.go:171","msg":"trace[1945616439] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2110; }","duration":"203.624964ms","start":"2024-09-20T18:23:23.661749Z","end":"2024-09-20T18:23:23.865374Z","steps":["trace[1945616439] 'agreement among raft nodes before linearized reading' (duration: 203.546895ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T18:23:23.865639Z","caller":"traceutil/trace.go:171","msg":"trace[1429413700] transaction","detail":"{read_only:false; response_revision:2110; number_of_response:1; }","duration":"210.845365ms","start":"2024-09-20T18:23:23.654785Z","end":"2024-09-20T18:23:23.865631Z","steps":["trace[1429413700] 'process raft request' (duration: 210.352466ms)"],"step_count":1}
==> gcp-auth [7c4b9c3a7c53984fdcd53d01df116f55695ae712f2f303bd6c13b7f7ae352228] <==
2024/09/20 18:14:53 Ready to write response ...
2024/09/20 18:14:55 Ready to marshal response ...
2024/09/20 18:14:55 Ready to write response ...
2024/09/20 18:14:55 Ready to marshal response ...
2024/09/20 18:14:55 Ready to write response ...
2024/09/20 18:22:59 Ready to marshal response ...
2024/09/20 18:22:59 Ready to write response ...
2024/09/20 18:22:59 Ready to marshal response ...
2024/09/20 18:22:59 Ready to write response ...
2024/09/20 18:22:59 Ready to marshal response ...
2024/09/20 18:22:59 Ready to write response ...
2024/09/20 18:23:05 Ready to marshal response ...
2024/09/20 18:23:05 Ready to write response ...
2024/09/20 18:23:05 Ready to marshal response ...
2024/09/20 18:23:05 Ready to write response ...
2024/09/20 18:23:10 Ready to marshal response ...
2024/09/20 18:23:10 Ready to write response ...
2024/09/20 18:23:15 Ready to marshal response ...
2024/09/20 18:23:15 Ready to write response ...
2024/09/20 18:23:18 Ready to marshal response ...
2024/09/20 18:23:18 Ready to write response ...
2024/09/20 18:23:29 Ready to marshal response ...
2024/09/20 18:23:29 Ready to write response ...
2024/09/20 18:23:37 Ready to marshal response ...
2024/09/20 18:23:37 Ready to write response ...
==> kernel <==
18:24:12 up 11 min, 0 users, load average: 0.30, 0.37, 0.32
Linux addons-446299 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [730952f4127d66b35d731eb28568293e71789263c71a1a0255283cb51922992c] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0920 18:15:27.823202 1 handler_proxy.go:99] no RequestInfo found in the context
E0920 18:15:27.823313 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0920 18:15:27.823420 1 handler_proxy.go:99] no RequestInfo found in the context
E0920 18:15:27.823588 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0920 18:15:27.824490 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0920 18:15:27.825326 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0920 18:15:31.828151 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.147.48:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
W0920 18:15:31.828390 1 handler_proxy.go:99] no RequestInfo found in the context
E0920 18:15:31.828450 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0920 18:15:31.847786 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E0920 18:15:31.853561 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
I0920 18:22:59.185908 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.29.221"}
I0920 18:23:23.918494 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0920 18:23:25.009930 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0920 18:23:29.482103 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0920 18:23:29.675487 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.190.241"}
I0920 18:23:30.728395 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [a8af18aadd9a198bf616d46a7b451c4aa04e96f96e40f4b3bfe6f0ed2db6278e] <==
I0920 18:23:05.584802 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="139.5µs"
I0920 18:23:05.648638 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="24.833813ms"
I0920 18:23:05.649442 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="34.713µs"
I0920 18:23:12.832142 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="3.409µs"
I0920 18:23:15.306992 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
I0920 18:23:15.981923 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="7.686µs"
I0920 18:23:22.948309 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
E0920 18:23:25.011628 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 18:23:26.072079 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:23:26.072150 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 18:23:27.852981 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-446299"
W0920 18:23:28.821603 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:23:28.821738 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 18:23:29.815035 1 shared_informer.go:313] Waiting for caches to sync for resource quota
I0920 18:23:29.815092 1 shared_informer.go:320] Caches are synced for resource quota
I0920 18:23:30.342351 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0920 18:23:30.342391 1 shared_informer.go:320] Caches are synced for garbage collector
I0920 18:23:34.046159 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
W0920 18:23:34.852782 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:23:34.852837 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 18:23:45.509339 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:23:45.509390 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 18:24:03.134228 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 18:24:03.134359 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 18:24:11.155220 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.255µs"
==> kube-proxy [371fb9f89e965c1d1f23b67cb00baa69dc199d2d1a7cb0255780a4516c7256a6] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0920 18:13:32.095684 1 proxier.go:734] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0920 18:13:32.111185 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
E0920 18:13:32.111246 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0920 18:13:32.254832 1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
I0920 18:13:32.254884 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0920 18:13:32.254908 1 server_linux.go:169] "Using iptables Proxier"
I0920 18:13:32.262039 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0920 18:13:32.262450 1 server.go:483] "Version info" version="v1.31.1"
I0920 18:13:32.262484 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0920 18:13:32.268397 1 config.go:199] "Starting service config controller"
I0920 18:13:32.268443 1 shared_informer.go:313] Waiting for caches to sync for service config
I0920 18:13:32.268473 1 config.go:105] "Starting endpoint slice config controller"
I0920 18:13:32.268477 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0920 18:13:32.268988 1 config.go:328] "Starting node config controller"
I0920 18:13:32.268994 1 shared_informer.go:313] Waiting for caches to sync for node config
I0920 18:13:32.368877 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0920 18:13:32.368886 1 shared_informer.go:320] Caches are synced for service config
I0920 18:13:32.369073 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [e9e7734f588477ea0c8338b75bff4c99d2033144998f9977041fbf99b5880072] <==
W0920 18:13:22.809246 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0920 18:13:22.809282 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 18:13:22.809585 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0920 18:13:22.809621 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 18:13:22.813253 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0920 18:13:22.813298 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 18:13:22.813377 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0920 18:13:22.813413 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 18:13:22.813464 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0920 18:13:22.813478 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 18:13:22.815129 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0920 18:13:22.815174 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 18:13:23.637031 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0920 18:13:23.637068 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 18:13:23.746262 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0920 18:13:23.746361 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 18:13:23.943434 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0920 18:13:23.943536 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 18:13:23.956043 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0920 18:13:23.956129 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 18:13:23.968884 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0920 18:13:23.969017 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 18:13:24.340405 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0920 18:13:24.340516 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0920 18:13:27.096843 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 20 18:23:49 addons-446299 kubelet[1199]: E0920 18:23:49.166554 1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
Sep 20 18:23:55 addons-446299 kubelet[1199]: E0920 18:23:55.504765 1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856635504269792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Sep 20 18:23:55 addons-446299 kubelet[1199]: E0920 18:23:55.505037 1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856635504269792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Sep 20 18:24:01 addons-446299 kubelet[1199]: E0920 18:24:01.344000 1199 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
Sep 20 18:24:01 addons-446299 kubelet[1199]: E0920 18:24:01.344419 1199 kuberuntime_image.go:55] "Failed to pull image" err="copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
Sep 20 18:24:01 addons-446299 kubelet[1199]: E0920 18:24:01.345259 1199 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8zg4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(e00699c2-7689-43aa-9a79-f6b8682fbe91): ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Sep 20 18:24:01 addons-446299 kubelet[1199]: E0920 18:24:01.348855 1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
Sep 20 18:24:02 addons-446299 kubelet[1199]: E0920 18:24:02.283634 1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="e00699c2-7689-43aa-9a79-f6b8682fbe91"
Sep 20 18:24:04 addons-446299 kubelet[1199]: E0920 18:24:04.166489 1199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="785bf044-a4fc-4f3b-aa48-f0c32d84c0cb"
Sep 20 18:24:05 addons-446299 kubelet[1199]: E0920 18:24:05.507516 1199 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856645506796293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Sep 20 18:24:05 addons-446299 kubelet[1199]: E0920 18:24:05.508092 1199 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726856645506796293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:519753,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.653223 1199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqgbp\" (UniqueName: \"kubernetes.io/projected/11ab987d-a80f-412a-8a15-03a5898a2e9e-kube-api-access-zqgbp\") pod \"11ab987d-a80f-412a-8a15-03a5898a2e9e\" (UID: \"11ab987d-a80f-412a-8a15-03a5898a2e9e\") "
Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.653316 1199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvc9z\" (UniqueName: \"kubernetes.io/projected/10b4cecb-c85b-45ef-8043-e88a81971d51-kube-api-access-nvc9z\") pod \"10b4cecb-c85b-45ef-8043-e88a81971d51\" (UID: \"10b4cecb-c85b-45ef-8043-e88a81971d51\") "
Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.656121 1199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b4cecb-c85b-45ef-8043-e88a81971d51-kube-api-access-nvc9z" (OuterVolumeSpecName: "kube-api-access-nvc9z") pod "10b4cecb-c85b-45ef-8043-e88a81971d51" (UID: "10b4cecb-c85b-45ef-8043-e88a81971d51"). InnerVolumeSpecName "kube-api-access-nvc9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.656658 1199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ab987d-a80f-412a-8a15-03a5898a2e9e-kube-api-access-zqgbp" (OuterVolumeSpecName: "kube-api-access-zqgbp") pod "11ab987d-a80f-412a-8a15-03a5898a2e9e" (UID: "11ab987d-a80f-412a-8a15-03a5898a2e9e"). InnerVolumeSpecName "kube-api-access-zqgbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.754519 1199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nvc9z\" (UniqueName: \"kubernetes.io/projected/10b4cecb-c85b-45ef-8043-e88a81971d51-kube-api-access-nvc9z\") on node \"addons-446299\" DevicePath \"\""
Sep 20 18:24:11 addons-446299 kubelet[1199]: I0920 18:24:11.754562 1199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zqgbp\" (UniqueName: \"kubernetes.io/projected/11ab987d-a80f-412a-8a15-03a5898a2e9e-kube-api-access-zqgbp\") on node \"addons-446299\" DevicePath \"\""
Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.347684 1199 scope.go:117] "RemoveContainer" containerID="5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3"
Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.390802 1199 scope.go:117] "RemoveContainer" containerID="5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3"
Sep 20 18:24:12 addons-446299 kubelet[1199]: E0920 18:24:12.396527 1199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3\": container with ID starting with 5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3 not found: ID does not exist" containerID="5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3"
Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.396652 1199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3"} err="failed to get container status \"5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3\": rpc error: code = NotFound desc = could not find container \"5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3\": container with ID starting with 5bc1f72e1ea240845c1b51e886bddd626c5c1de271a30103c731f8c4931a84d3 not found: ID does not exist"
Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.396753 1199 scope.go:117] "RemoveContainer" containerID="c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de"
Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.443568 1199 scope.go:117] "RemoveContainer" containerID="c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de"
Sep 20 18:24:12 addons-446299 kubelet[1199]: E0920 18:24:12.444359 1199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de\": container with ID starting with c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de not found: ID does not exist" containerID="c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de"
Sep 20 18:24:12 addons-446299 kubelet[1199]: I0920 18:24:12.444396 1199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de"} err="failed to get container status \"c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de\": rpc error: code = NotFound desc = could not find container \"c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de\": container with ID starting with c8bc74b520cd1d4dcf7bb82c116c356ff3d8c71b059d02bc9aa144a3677ff3de not found: ID does not exist"
==> storage-provisioner [123e17c57dc2abd9c047233f8257257a3994d71637992344add53ad7199bd9f0] <==
I0920 18:13:37.673799 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0920 18:13:37.889195 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0920 18:13:37.889268 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0920 18:13:37.991169 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0920 18:13:37.991374 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
I0920 18:13:37.992328 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e2a2b2a-26e5-43f5-ad91-442df4e21dfd", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8 became leader
I0920 18:13:38.191750 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-446299_0cfdff58-c718-409b-bc42-bb5f67205de8!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-446299 -n addons-446299
helpers_test.go:261: (dbg) Run: kubectl --context addons-446299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx registry-test task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-446299 describe pod busybox nginx registry-test task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-446299 describe pod busybox nginx registry-test task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1 (91.465123ms)
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-446299/192.168.39.237
Start Time: Fri, 20 Sep 2024 18:14:55 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.22
IPs:
IP: 10.244.0.22
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6l6f (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-s6l6f:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m18s default-scheduler Successfully assigned default/busybox to addons-446299
Normal Pulling 7m52s (x4 over 9m17s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m52s (x4 over 9m17s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning Failed 7m52s (x4 over 9m17s) kubelet Error: ErrImagePull
Warning Failed 7m28s (x6 over 9m16s) kubelet Error: ImagePullBackOff
Normal BackOff 4m6s (x20 over 9m16s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: addons-446299/192.168.39.237
Start Time: Fri, 20 Sep 2024 18:23:29 +0000
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.29
IPs:
IP: 10.244.0.29
Containers:
nginx:
Container ID:
Image: docker.io/nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zg4g (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-8zg4g:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned default/nginx to addons-446299
Warning Failed 12s kubelet Failed to pull image "docker.io/nginx:alpine": copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning Failed 12s kubelet Error: ErrImagePull
Normal BackOff 11s kubelet Back-off pulling image "docker.io/nginx:alpine"
Warning Failed 11s kubelet Error: ImagePullBackOff
Normal Pulling 0s (x2 over 43s) kubelet Pulling image "docker.io/nginx:alpine"
Name: registry-test
Namespace: default
Priority: 0
Service Account: default
Node: addons-446299/192.168.39.237
Start Time: Fri, 20 Sep 2024 18:23:10 +0000
Labels: run=registry-test
Annotations: <none>
Status: Terminating (lasts <invalid>)
Termination Grace Period: 30s
IP: 10.244.0.25
IPs:
IP: 10.244.0.25
Containers:
registry-test:
Container ID:
Image: gcr.io/k8s-minikube/busybox
Image ID:
Port: <none>
Host Port: <none>
Args:
sh
-c
wget --spider -S http://registry.kube-system.svc.cluster.local
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zlk52 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-zlk52:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 63s default-scheduler Successfully assigned default/registry-test to addons-446299
Warning Failed 47s (x2 over 62s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning Failed 47s (x2 over 62s) kubelet Error: ErrImagePull
Normal BackOff 34s (x2 over 62s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox"
Warning Failed 34s (x2 over 62s) kubelet Error: ImagePullBackOff
Normal Pulling 21s (x3 over 62s) kubelet Pulling image "gcr.io/k8s-minikube/busybox"
Name: task-pv-pod-restore
Namespace: default
Priority: 0
Service Account: default
Node: addons-446299/192.168.39.237
Start Time: Fri, 20 Sep 2024 18:23:37 +0000
Labels: app=task-pv-pod-restore
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
task-pv-container:
Container ID:
Image: docker.io/nginx
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzgp9 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
task-pv-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hpvc-restore
ReadOnly: false
kube-api-access-zzgp9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 36s default-scheduler Successfully assigned default/task-pv-pod-restore to addons-446299
Normal Pulling 35s kubelet Pulling image "docker.io/nginx"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-sdwls" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-2mwr8" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-446299 describe pod busybox nginx registry-test task-pv-pod-restore ingress-nginx-admission-create-sdwls ingress-nginx-admission-patch-2mwr8: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.19s)