=== RUN TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.21126ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003344255s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00471846s
addons_test.go:342: (dbg) Run: kubectl --context addons-959832 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context addons-959832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-959832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.089974498s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-959832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run: out/minikube-linux-amd64 -p addons-959832 ip
2024/09/06 18:40:57 [DEBUG] GET http://192.168.39.98:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-amd64 -p addons-959832 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-959832 -n addons-959832
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-959832 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 logs -n 25: (1.385983256s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | |
| | -p download-only-726386 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=crio | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| delete | -p download-only-726386 | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| start | -o=json --download-only | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | |
| | -p download-only-693029 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.0 | | | | | |
| | --container-runtime=crio | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| delete | -p download-only-693029 | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| delete | -p download-only-726386 | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| delete | -p download-only-693029 | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| start | --download-only -p | binary-mirror-071210 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | |
| | binary-mirror-071210 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:42457 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-071210 | binary-mirror-071210 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| addons | disable dashboard -p | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | |
| | addons-959832 | | | | | |
| addons | enable dashboard -p | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | |
| | addons-959832 | | | | | |
| start | -p addons-959832 --wait=true | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:31 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=helm-tiller | | | | | |
| addons | disable inspektor-gadget -p | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:39 UTC | 06 Sep 24 18:39 UTC |
| | addons-959832 | | | | | |
| addons | addons-959832 addons disable | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
| | helm-tiller --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ssh | addons-959832 ssh curl -s | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| addons | addons-959832 addons | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-959832 addons | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-959832 ssh cat | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
| | /opt/local-path-provisioner/pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9_default_test-pvc/file1 | | | | | |
| addons | addons-959832 addons disable | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-959832 addons disable | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| ip | addons-959832 ip | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
| addons | addons-959832 addons disable | addons-959832 | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/06 18:29:30
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0906 18:29:30.440394 13823 out.go:345] Setting OutFile to fd 1 ...
I0906 18:29:30.440643 13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:29:30.440652 13823 out.go:358] Setting ErrFile to fd 2...
I0906 18:29:30.440656 13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:29:30.440824 13823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
I0906 18:29:30.441460 13823 out.go:352] Setting JSON to false
I0906 18:29:30.442255 13823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":719,"bootTime":1725646651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0906 18:29:30.442312 13823 start.go:139] virtualization: kvm guest
I0906 18:29:30.444228 13823 out.go:177] * [addons-959832] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I0906 18:29:30.445334 13823 notify.go:220] Checking for updates...
I0906 18:29:30.445342 13823 out.go:177] - MINIKUBE_LOCATION=19576
I0906 18:29:30.446652 13823 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0906 18:29:30.448060 13823 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
I0906 18:29:30.449528 13823 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
I0906 18:29:30.450779 13823 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0906 18:29:30.451986 13823 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0906 18:29:30.453700 13823 driver.go:394] Setting default libvirt URI to qemu:///system
I0906 18:29:30.485465 13823 out.go:177] * Using the kvm2 driver based on user configuration
I0906 18:29:30.486701 13823 start.go:297] selected driver: kvm2
I0906 18:29:30.486713 13823 start.go:901] validating driver "kvm2" against <nil>
I0906 18:29:30.486727 13823 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0906 18:29:30.487397 13823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0906 18:29:30.487478 13823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0906 18:29:30.502694 13823 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
I0906 18:29:30.502738 13823 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0906 18:29:30.502931 13823 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0906 18:29:30.502959 13823 cni.go:84] Creating CNI manager for ""
I0906 18:29:30.502966 13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0906 18:29:30.502978 13823 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0906 18:29:30.503026 13823 start.go:340] cluster config:
{Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0906 18:29:30.503117 13823 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0906 18:29:30.504979 13823 out.go:177] * Starting "addons-959832" primary control-plane node in "addons-959832" cluster
I0906 18:29:30.506126 13823 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
I0906 18:29:30.506168 13823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
I0906 18:29:30.506178 13823 cache.go:56] Caching tarball of preloaded images
I0906 18:29:30.506272 13823 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0906 18:29:30.506286 13823 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
I0906 18:29:30.506559 13823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json ...
I0906 18:29:30.506577 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json: {Name:mkb043cbbb2997cf908fb60acd39795871d65137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:29:30.506698 13823 start.go:360] acquireMachinesLock for addons-959832: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0906 18:29:30.506741 13823 start.go:364] duration metric: took 31.601µs to acquireMachinesLock for "addons-959832"
I0906 18:29:30.506759 13823 start.go:93] Provisioning new machine with config: &{Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
I0906 18:29:30.506820 13823 start.go:125] createHost starting for "" (driver="kvm2")
I0906 18:29:30.508432 13823 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0906 18:29:30.508550 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:29:30.508587 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:29:30.522987 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34483
I0906 18:29:30.523384 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:29:30.523869 13823 main.go:141] libmachine: Using API Version 1
I0906 18:29:30.523890 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:29:30.524169 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:29:30.524345 13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
I0906 18:29:30.524450 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:29:30.524591 13823 start.go:159] libmachine.API.Create for "addons-959832" (driver="kvm2")
I0906 18:29:30.524624 13823 client.go:168] LocalClient.Create starting
I0906 18:29:30.524668 13823 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
I0906 18:29:30.595679 13823 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
I0906 18:29:30.708441 13823 main.go:141] libmachine: Running pre-create checks...
I0906 18:29:30.708464 13823 main.go:141] libmachine: (addons-959832) Calling .PreCreateCheck
I0906 18:29:30.708957 13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
I0906 18:29:30.709397 13823 main.go:141] libmachine: Creating machine...
I0906 18:29:30.709410 13823 main.go:141] libmachine: (addons-959832) Calling .Create
I0906 18:29:30.709556 13823 main.go:141] libmachine: (addons-959832) Creating KVM machine...
I0906 18:29:30.710795 13823 main.go:141] libmachine: (addons-959832) DBG | found existing default KVM network
I0906 18:29:30.711508 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:30.711378 13845 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
I0906 18:29:30.711570 13823 main.go:141] libmachine: (addons-959832) DBG | created network xml:
I0906 18:29:30.711607 13823 main.go:141] libmachine: (addons-959832) DBG | <network>
I0906 18:29:30.711624 13823 main.go:141] libmachine: (addons-959832) DBG | <name>mk-addons-959832</name>
I0906 18:29:30.711646 13823 main.go:141] libmachine: (addons-959832) DBG | <dns enable='no'/>
I0906 18:29:30.711654 13823 main.go:141] libmachine: (addons-959832) DBG |
I0906 18:29:30.711661 13823 main.go:141] libmachine: (addons-959832) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0906 18:29:30.711668 13823 main.go:141] libmachine: (addons-959832) DBG | <dhcp>
I0906 18:29:30.711673 13823 main.go:141] libmachine: (addons-959832) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0906 18:29:30.711684 13823 main.go:141] libmachine: (addons-959832) DBG | </dhcp>
I0906 18:29:30.711691 13823 main.go:141] libmachine: (addons-959832) DBG | </ip>
I0906 18:29:30.711698 13823 main.go:141] libmachine: (addons-959832) DBG |
I0906 18:29:30.711706 13823 main.go:141] libmachine: (addons-959832) DBG | </network>
I0906 18:29:30.711714 13823 main.go:141] libmachine: (addons-959832) DBG |
I0906 18:29:30.716914 13823 main.go:141] libmachine: (addons-959832) DBG | trying to create private KVM network mk-addons-959832 192.168.39.0/24...
I0906 18:29:30.784502 13823 main.go:141] libmachine: (addons-959832) DBG | private KVM network mk-addons-959832 192.168.39.0/24 created
I0906 18:29:30.784548 13823 main.go:141] libmachine: (addons-959832) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 ...
I0906 18:29:30.784580 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:30.784495 13845 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
I0906 18:29:30.784596 13823 main.go:141] libmachine: (addons-959832) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
I0906 18:29:30.784621 13823 main.go:141] libmachine: (addons-959832) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
I0906 18:29:31.031605 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.031496 13845 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa...
I0906 18:29:31.150285 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.150157 13845 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/addons-959832.rawdisk...
I0906 18:29:31.150312 13823 main.go:141] libmachine: (addons-959832) DBG | Writing magic tar header
I0906 18:29:31.150322 13823 main.go:141] libmachine: (addons-959832) DBG | Writing SSH key tar header
I0906 18:29:31.150329 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.150306 13845 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 ...
I0906 18:29:31.150514 13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832
I0906 18:29:31.150551 13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
I0906 18:29:31.150582 13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 (perms=drwx------)
I0906 18:29:31.150604 13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
I0906 18:29:31.150630 13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
I0906 18:29:31.150652 13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
I0906 18:29:31.150664 13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
I0906 18:29:31.150681 13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
I0906 18:29:31.150694 13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0906 18:29:31.150709 13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins
I0906 18:29:31.150726 13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0906 18:29:31.150738 13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home
I0906 18:29:31.150755 13823 main.go:141] libmachine: (addons-959832) DBG | Skipping /home - not owner
I0906 18:29:31.150771 13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0906 18:29:31.150781 13823 main.go:141] libmachine: (addons-959832) Creating domain...
I0906 18:29:31.151641 13823 main.go:141] libmachine: (addons-959832) define libvirt domain using xml:
I0906 18:29:31.151668 13823 main.go:141] libmachine: (addons-959832) <domain type='kvm'>
I0906 18:29:31.151680 13823 main.go:141] libmachine: (addons-959832) <name>addons-959832</name>
I0906 18:29:31.151693 13823 main.go:141] libmachine: (addons-959832) <memory unit='MiB'>4000</memory>
I0906 18:29:31.151703 13823 main.go:141] libmachine: (addons-959832) <vcpu>2</vcpu>
I0906 18:29:31.151718 13823 main.go:141] libmachine: (addons-959832) <features>
I0906 18:29:31.151723 13823 main.go:141] libmachine: (addons-959832) <acpi/>
I0906 18:29:31.151727 13823 main.go:141] libmachine: (addons-959832) <apic/>
I0906 18:29:31.151736 13823 main.go:141] libmachine: (addons-959832) <pae/>
I0906 18:29:31.151741 13823 main.go:141] libmachine: (addons-959832)
I0906 18:29:31.151747 13823 main.go:141] libmachine: (addons-959832) </features>
I0906 18:29:31.151754 13823 main.go:141] libmachine: (addons-959832) <cpu mode='host-passthrough'>
I0906 18:29:31.151759 13823 main.go:141] libmachine: (addons-959832)
I0906 18:29:31.151772 13823 main.go:141] libmachine: (addons-959832) </cpu>
I0906 18:29:31.151779 13823 main.go:141] libmachine: (addons-959832) <os>
I0906 18:29:31.151788 13823 main.go:141] libmachine: (addons-959832) <type>hvm</type>
I0906 18:29:31.151795 13823 main.go:141] libmachine: (addons-959832) <boot dev='cdrom'/>
I0906 18:29:31.151801 13823 main.go:141] libmachine: (addons-959832) <boot dev='hd'/>
I0906 18:29:31.151808 13823 main.go:141] libmachine: (addons-959832) <bootmenu enable='no'/>
I0906 18:29:31.151812 13823 main.go:141] libmachine: (addons-959832) </os>
I0906 18:29:31.151818 13823 main.go:141] libmachine: (addons-959832) <devices>
I0906 18:29:31.151825 13823 main.go:141] libmachine: (addons-959832) <disk type='file' device='cdrom'>
I0906 18:29:31.151834 13823 main.go:141] libmachine: (addons-959832) <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/boot2docker.iso'/>
I0906 18:29:31.151841 13823 main.go:141] libmachine: (addons-959832) <target dev='hdc' bus='scsi'/>
I0906 18:29:31.151847 13823 main.go:141] libmachine: (addons-959832) <readonly/>
I0906 18:29:31.151853 13823 main.go:141] libmachine: (addons-959832) </disk>
I0906 18:29:31.151859 13823 main.go:141] libmachine: (addons-959832) <disk type='file' device='disk'>
I0906 18:29:31.151867 13823 main.go:141] libmachine: (addons-959832) <driver name='qemu' type='raw' cache='default' io='threads' />
I0906 18:29:31.151878 13823 main.go:141] libmachine: (addons-959832) <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/addons-959832.rawdisk'/>
I0906 18:29:31.151886 13823 main.go:141] libmachine: (addons-959832) <target dev='hda' bus='virtio'/>
I0906 18:29:31.151894 13823 main.go:141] libmachine: (addons-959832) </disk>
I0906 18:29:31.151899 13823 main.go:141] libmachine: (addons-959832) <interface type='network'>
I0906 18:29:31.151908 13823 main.go:141] libmachine: (addons-959832) <source network='mk-addons-959832'/>
I0906 18:29:31.151915 13823 main.go:141] libmachine: (addons-959832) <model type='virtio'/>
I0906 18:29:31.151923 13823 main.go:141] libmachine: (addons-959832) </interface>
I0906 18:29:31.151931 13823 main.go:141] libmachine: (addons-959832) <interface type='network'>
I0906 18:29:31.151957 13823 main.go:141] libmachine: (addons-959832) <source network='default'/>
I0906 18:29:31.151984 13823 main.go:141] libmachine: (addons-959832) <model type='virtio'/>
I0906 18:29:31.151993 13823 main.go:141] libmachine: (addons-959832) </interface>
I0906 18:29:31.152008 13823 main.go:141] libmachine: (addons-959832) <serial type='pty'>
I0906 18:29:31.152028 13823 main.go:141] libmachine: (addons-959832) <target port='0'/>
I0906 18:29:31.152046 13823 main.go:141] libmachine: (addons-959832) </serial>
I0906 18:29:31.152059 13823 main.go:141] libmachine: (addons-959832) <console type='pty'>
I0906 18:29:31.152070 13823 main.go:141] libmachine: (addons-959832) <target type='serial' port='0'/>
I0906 18:29:31.152078 13823 main.go:141] libmachine: (addons-959832) </console>
I0906 18:29:31.152086 13823 main.go:141] libmachine: (addons-959832) <rng model='virtio'>
I0906 18:29:31.152095 13823 main.go:141] libmachine: (addons-959832) <backend model='random'>/dev/random</backend>
I0906 18:29:31.152103 13823 main.go:141] libmachine: (addons-959832) </rng>
I0906 18:29:31.152113 13823 main.go:141] libmachine: (addons-959832)
I0906 18:29:31.152126 13823 main.go:141] libmachine: (addons-959832)
I0906 18:29:31.152138 13823 main.go:141] libmachine: (addons-959832) </devices>
I0906 18:29:31.152148 13823 main.go:141] libmachine: (addons-959832) </domain>
I0906 18:29:31.152161 13823 main.go:141] libmachine: (addons-959832)
I0906 18:29:31.158081 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:b5:f5:6a in network default
I0906 18:29:31.158542 13823 main.go:141] libmachine: (addons-959832) Ensuring networks are active...
I0906 18:29:31.158562 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:31.159097 13823 main.go:141] libmachine: (addons-959832) Ensuring network default is active
I0906 18:29:31.159345 13823 main.go:141] libmachine: (addons-959832) Ensuring network mk-addons-959832 is active
I0906 18:29:31.159767 13823 main.go:141] libmachine: (addons-959832) Getting domain xml...
I0906 18:29:31.160314 13823 main.go:141] libmachine: (addons-959832) Creating domain...
I0906 18:29:32.546282 13823 main.go:141] libmachine: (addons-959832) Waiting to get IP...
I0906 18:29:32.547051 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:32.547580 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:32.547618 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:32.547518 13845 retry.go:31] will retry after 234.819193ms: waiting for machine to come up
I0906 18:29:32.783988 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:32.784398 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:32.784420 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:32.784350 13845 retry.go:31] will retry after 374.097016ms: waiting for machine to come up
I0906 18:29:33.159641 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:33.160076 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:33.160104 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:33.160024 13845 retry.go:31] will retry after 398.438198ms: waiting for machine to come up
I0906 18:29:33.559453 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:33.559850 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:33.559879 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:33.559800 13845 retry.go:31] will retry after 513.667683ms: waiting for machine to come up
I0906 18:29:34.075531 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:34.075976 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:34.076002 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:34.075937 13845 retry.go:31] will retry after 542.640322ms: waiting for machine to come up
I0906 18:29:34.620767 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:34.621139 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:34.621164 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:34.621100 13845 retry.go:31] will retry after 952.553494ms: waiting for machine to come up
I0906 18:29:35.575061 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:35.575519 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:35.575550 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:35.575475 13845 retry.go:31] will retry after 761.897484ms: waiting for machine to come up
I0906 18:29:36.339380 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:36.339747 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:36.339775 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:36.339696 13845 retry.go:31] will retry after 1.058974587s: waiting for machine to come up
I0906 18:29:37.399861 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:37.400184 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:37.400204 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:37.400146 13845 retry.go:31] will retry after 1.319275872s: waiting for machine to come up
I0906 18:29:38.720600 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:38.721039 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:38.721065 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:38.720974 13845 retry.go:31] will retry after 1.544734383s: waiting for machine to come up
I0906 18:29:40.267964 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:40.268338 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:40.268365 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:40.268303 13845 retry.go:31] will retry after 2.517498837s: waiting for machine to come up
I0906 18:29:42.790192 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:42.790620 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:42.790646 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:42.790574 13845 retry.go:31] will retry after 2.829630462s: waiting for machine to come up
I0906 18:29:45.621992 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:45.622542 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:45.622614 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:45.622535 13845 retry.go:31] will retry after 3.555249592s: waiting for machine to come up
I0906 18:29:49.181782 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:49.182176 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
I0906 18:29:49.182199 13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:49.182134 13845 retry.go:31] will retry after 4.155059883s: waiting for machine to come up
I0906 18:29:53.340058 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:53.340648 13823 main.go:141] libmachine: (addons-959832) Found IP for machine: 192.168.39.98
I0906 18:29:53.340677 13823 main.go:141] libmachine: (addons-959832) Reserving static IP address...
I0906 18:29:53.340693 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has current primary IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:53.341097 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find host DHCP lease matching {name: "addons-959832", mac: "52:54:00:c2:2d:3d", ip: "192.168.39.98"} in network mk-addons-959832
I0906 18:29:53.410890 13823 main.go:141] libmachine: (addons-959832) DBG | Getting to WaitForSSH function...
I0906 18:29:53.410935 13823 main.go:141] libmachine: (addons-959832) Reserved static IP address: 192.168.39.98
I0906 18:29:53.410957 13823 main.go:141] libmachine: (addons-959832) Waiting for SSH to be available...
I0906 18:29:53.413061 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:53.413353 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832
I0906 18:29:53.413381 13823 main.go:141] libmachine: (addons-959832) DBG | unable to find defined IP address of network mk-addons-959832 interface with MAC address 52:54:00:c2:2d:3d
I0906 18:29:53.413528 13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH client type: external
I0906 18:29:53.413551 13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa (-rw-------)
I0906 18:29:53.413582 13823 main.go:141] libmachine: (addons-959832) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa -p 22] /usr/bin/ssh <nil>}
I0906 18:29:53.413596 13823 main.go:141] libmachine: (addons-959832) DBG | About to run SSH command:
I0906 18:29:53.413610 13823 main.go:141] libmachine: (addons-959832) DBG | exit 0
I0906 18:29:53.424764 13823 main.go:141] libmachine: (addons-959832) DBG | SSH cmd err, output: exit status 255:
I0906 18:29:53.424790 13823 main.go:141] libmachine: (addons-959832) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0906 18:29:53.424803 13823 main.go:141] libmachine: (addons-959832) DBG | command : exit 0
I0906 18:29:53.424811 13823 main.go:141] libmachine: (addons-959832) DBG | err : exit status 255
I0906 18:29:53.424834 13823 main.go:141] libmachine: (addons-959832) DBG | output :
I0906 18:29:56.425071 13823 main.go:141] libmachine: (addons-959832) DBG | Getting to WaitForSSH function...
I0906 18:29:56.427965 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.428313 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:56.428337 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.428498 13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH client type: external
I0906 18:29:56.428529 13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa (-rw-------)
I0906 18:29:56.428584 13823 main.go:141] libmachine: (addons-959832) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa -p 22] /usr/bin/ssh <nil>}
I0906 18:29:56.428611 13823 main.go:141] libmachine: (addons-959832) DBG | About to run SSH command:
I0906 18:29:56.428625 13823 main.go:141] libmachine: (addons-959832) DBG | exit 0
I0906 18:29:56.557151 13823 main.go:141] libmachine: (addons-959832) DBG | SSH cmd err, output: <nil>:
I0906 18:29:56.557379 13823 main.go:141] libmachine: (addons-959832) KVM machine creation complete!
I0906 18:29:56.557702 13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
I0906 18:29:56.558229 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:29:56.558444 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:29:56.558623 13823 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0906 18:29:56.558641 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:29:56.559843 13823 main.go:141] libmachine: Detecting operating system of created instance...
I0906 18:29:56.559860 13823 main.go:141] libmachine: Waiting for SSH to be available...
I0906 18:29:56.559867 13823 main.go:141] libmachine: Getting to WaitForSSH function...
I0906 18:29:56.559876 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:56.562179 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.562551 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:56.562587 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.562760 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:56.562922 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:56.563071 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:56.563184 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:56.563323 13823 main.go:141] libmachine: Using SSH client type: native
I0906 18:29:56.563491 13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil> [] 0s} 192.168.39.98 22 <nil> <nil>}
I0906 18:29:56.563501 13823 main.go:141] libmachine: About to run SSH command:
exit 0
I0906 18:29:56.672324 13823 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0906 18:29:56.672345 13823 main.go:141] libmachine: Detecting the provisioner...
I0906 18:29:56.672355 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:56.675030 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.675361 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:56.675396 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.675587 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:56.675810 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:56.675962 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:56.676117 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:56.676285 13823 main.go:141] libmachine: Using SSH client type: native
I0906 18:29:56.676485 13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil> [] 0s} 192.168.39.98 22 <nil> <nil>}
I0906 18:29:56.676498 13823 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0906 18:29:56.789500 13823 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0906 18:29:56.789599 13823 main.go:141] libmachine: found compatible host: buildroot
I0906 18:29:56.789615 13823 main.go:141] libmachine: Provisioning with buildroot...
I0906 18:29:56.789627 13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
I0906 18:29:56.789887 13823 buildroot.go:166] provisioning hostname "addons-959832"
I0906 18:29:56.789910 13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
I0906 18:29:56.790145 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:56.792479 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.792813 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:56.792840 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.792964 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:56.793128 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:56.793278 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:56.793413 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:56.793564 13823 main.go:141] libmachine: Using SSH client type: native
I0906 18:29:56.793755 13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil> [] 0s} 192.168.39.98 22 <nil> <nil>}
I0906 18:29:56.793770 13823 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-959832 && echo "addons-959832" | sudo tee /etc/hostname
I0906 18:29:56.923171 13823 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-959832
I0906 18:29:56.923196 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:56.925829 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.926137 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:56.926165 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:56.926301 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:56.926516 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:56.926688 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:56.926855 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:56.927018 13823 main.go:141] libmachine: Using SSH client type: native
I0906 18:29:56.927167 13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil> [] 0s} 192.168.39.98 22 <nil> <nil>}
I0906 18:29:56.927182 13823 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-959832' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-959832/g' /etc/hosts;
else
echo '127.0.1.1 addons-959832' | sudo tee -a /etc/hosts;
fi
fi
I0906 18:29:57.047682 13823 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0906 18:29:57.047717 13823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
I0906 18:29:57.047760 13823 buildroot.go:174] setting up certificates
I0906 18:29:57.047779 13823 provision.go:84] configureAuth start
I0906 18:29:57.047796 13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
I0906 18:29:57.048060 13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
I0906 18:29:57.050451 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.050790 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.050828 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.050983 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:57.053241 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.053584 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.053615 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.053778 13823 provision.go:143] copyHostCerts
I0906 18:29:57.053849 13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
I0906 18:29:57.054015 13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
I0906 18:29:57.054086 13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
I0906 18:29:57.054144 13823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.addons-959832 san=[127.0.0.1 192.168.39.98 addons-959832 localhost minikube]
I0906 18:29:57.192700 13823 provision.go:177] copyRemoteCerts
I0906 18:29:57.192756 13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0906 18:29:57.192779 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:57.195474 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.195742 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.195770 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.195927 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:57.196116 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:57.196268 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:57.196488 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:29:57.284813 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0906 18:29:57.312554 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0906 18:29:57.338356 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0906 18:29:57.363612 13823 provision.go:87] duration metric: took 315.815529ms to configureAuth
I0906 18:29:57.363640 13823 buildroot.go:189] setting minikube options for container-runtime
I0906 18:29:57.363826 13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:29:57.363907 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:57.366452 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.366841 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.366868 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.367008 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:57.367195 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:57.367349 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:57.367475 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:57.367620 13823 main.go:141] libmachine: Using SSH client type: native
I0906 18:29:57.367765 13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil> [] 0s} 192.168.39.98 22 <nil> <nil>}
I0906 18:29:57.367779 13823 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0906 18:29:57.603163 13823 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
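The insecure-registry flag above is delivered to CRI-O through the /etc/sysconfig/crio.minikube drop-in written by that SSH command. A minimal way to confirm it landed on the node, using the same binary and profile as the rest of this log (the ssh invocation here is illustrative and not part of the test run):
    # print the environment drop-in minikube wrote
    out/minikube-linux-amd64 -p addons-959832 ssh -- sudo cat /etc/sysconfig/crio.minikube
    # show the crio unit plus any drop-ins that consume it
    out/minikube-linux-amd64 -p addons-959832 ssh -- sudo systemctl cat crio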
I0906 18:29:57.603188 13823 main.go:141] libmachine: Checking connection to Docker...
I0906 18:29:57.603196 13823 main.go:141] libmachine: (addons-959832) Calling .GetURL
I0906 18:29:57.604560 13823 main.go:141] libmachine: (addons-959832) DBG | Using libvirt version 6000000
I0906 18:29:57.606895 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.607175 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.607201 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.607398 13823 main.go:141] libmachine: Docker is up and running!
I0906 18:29:57.607413 13823 main.go:141] libmachine: Reticulating splines...
I0906 18:29:57.607421 13823 client.go:171] duration metric: took 27.082788539s to LocalClient.Create
I0906 18:29:57.607447 13823 start.go:167] duration metric: took 27.082857245s to libmachine.API.Create "addons-959832"
I0906 18:29:57.607462 13823 start.go:293] postStartSetup for "addons-959832" (driver="kvm2")
I0906 18:29:57.607488 13823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0906 18:29:57.607514 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:29:57.607782 13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0906 18:29:57.607801 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:57.609814 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.610081 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.610134 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.610226 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:57.610417 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:57.610608 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:57.610769 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:29:57.695798 13823 ssh_runner.go:195] Run: cat /etc/os-release
I0906 18:29:57.700464 13823 info.go:137] Remote host: Buildroot 2023.02.9
I0906 18:29:57.700493 13823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
I0906 18:29:57.700596 13823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
I0906 18:29:57.700630 13823 start.go:296] duration metric: took 93.15804ms for postStartSetup
I0906 18:29:57.700663 13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
I0906 18:29:57.701257 13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
I0906 18:29:57.704196 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.704554 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.704585 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.704877 13823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json ...
I0906 18:29:57.705072 13823 start.go:128] duration metric: took 27.1982419s to createHost
I0906 18:29:57.705098 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:57.707499 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.707842 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.707862 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.708035 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:57.708256 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:57.708433 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:57.708569 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:57.708760 13823 main.go:141] libmachine: Using SSH client type: native
I0906 18:29:57.708991 13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil> [] 0s} 192.168.39.98 22 <nil> <nil>}
I0906 18:29:57.709005 13823 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0906 18:29:57.821756 13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725647397.800291454
I0906 18:29:57.821779 13823 fix.go:216] guest clock: 1725647397.800291454
I0906 18:29:57.821789 13823 fix.go:229] Guest: 2024-09-06 18:29:57.800291454 +0000 UTC Remote: 2024-09-06 18:29:57.705083739 +0000 UTC m=+27.297090225 (delta=95.207715ms)
I0906 18:29:57.821840 13823 fix.go:200] guest clock delta is within tolerance: 95.207715ms
I0906 18:29:57.821853 13823 start.go:83] releasing machines lock for "addons-959832", held for 27.315095887s
I0906 18:29:57.821881 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:29:57.822185 13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
I0906 18:29:57.824591 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.824964 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.824991 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.825103 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:29:57.825621 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:29:57.825837 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:29:57.825955 13823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0906 18:29:57.825998 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:57.826048 13823 ssh_runner.go:195] Run: cat /version.json
I0906 18:29:57.826075 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:29:57.828396 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.828722 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.828752 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.828771 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.828910 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:57.829111 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:57.829201 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:57.829221 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:57.829287 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:57.829450 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:29:57.829463 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:29:57.829621 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:29:57.829749 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:29:57.829859 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:29:57.948786 13823 ssh_runner.go:195] Run: systemctl --version
I0906 18:29:57.955191 13823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0906 18:29:58.113311 13823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0906 18:29:58.119769 13823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0906 18:29:58.119846 13823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0906 18:29:58.135762 13823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0906 18:29:58.135789 13823 start.go:495] detecting cgroup driver to use...
I0906 18:29:58.135859 13823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0906 18:29:58.151729 13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0906 18:29:58.166404 13823 docker.go:217] disabling cri-docker service (if available) ...
I0906 18:29:58.166473 13823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0906 18:29:58.180954 13823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0906 18:29:58.195119 13823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0906 18:29:58.315328 13823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0906 18:29:58.467302 13823 docker.go:233] disabling docker service ...
I0906 18:29:58.467362 13823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0906 18:29:58.482228 13823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0906 18:29:58.495471 13823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0906 18:29:58.606896 13823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0906 18:29:58.717897 13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0906 18:29:58.732638 13823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0906 18:29:58.751394 13823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I0906 18:29:58.751461 13823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I0906 18:29:58.762265 13823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0906 18:29:58.762343 13823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0906 18:29:58.772625 13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0906 18:29:58.783002 13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0906 18:29:58.793237 13823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0906 18:29:58.804024 13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0906 18:29:58.814731 13823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0906 18:29:58.832054 13823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I0906 18:29:58.842905 13823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0906 18:29:58.852537 13823 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0906 18:29:58.852595 13823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0906 18:29:58.866354 13823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
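The sysctl failure a few lines up is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, which is what the modprobe that follows takes care of. A quick manual re-check of the end state (illustrative commands, not part of the test run):
    # the module should now be loaded and both knobs set for bridged pod traffic
    out/minikube-linux-amd64 -p addons-959832 ssh -- "lsmod | grep br_netfilter"
    out/minikube-linux-amd64 -p addons-959832 ssh -- "sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"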
I0906 18:29:58.877194 13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0906 18:29:59.004604 13823 ssh_runner.go:195] Run: sudo systemctl restart crio
I0906 18:29:59.101439 13823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I0906 18:29:59.101538 13823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0906 18:29:59.106286 13823 start.go:563] Will wait 60s for crictl version
I0906 18:29:59.106358 13823 ssh_runner.go:195] Run: which crictl
I0906 18:29:59.110304 13823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0906 18:29:59.148807 13823 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I0906 18:29:59.148953 13823 ssh_runner.go:195] Run: crio --version
I0906 18:29:59.178394 13823 ssh_runner.go:195] Run: crio --version
I0906 18:29:59.210051 13823 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
I0906 18:29:59.211504 13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
I0906 18:29:59.214173 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:59.214515 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:29:59.214548 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:29:59.214703 13823 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0906 18:29:59.218969 13823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0906 18:29:59.231960 13823 kubeadm.go:883] updating cluster {Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0906 18:29:59.232084 13823 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
I0906 18:29:59.232129 13823 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 18:29:59.263727 13823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
I0906 18:29:59.263807 13823 ssh_runner.go:195] Run: which lz4
I0906 18:29:59.267901 13823 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0906 18:29:59.271879 13823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0906 18:29:59.271906 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
I0906 18:30:00.584417 13823 crio.go:462] duration metric: took 1.316553716s to copy over tarball
I0906 18:30:00.584486 13823 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0906 18:30:02.812933 13823 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.228424681s)
I0906 18:30:02.812968 13823 crio.go:469] duration metric: took 2.22852468s to extract the tarball
I0906 18:30:02.812978 13823 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0906 18:30:02.850138 13823 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 18:30:02.893341 13823 crio.go:514] all images are preloaded for cri-o runtime.
I0906 18:30:02.893365 13823 cache_images.go:84] Images are preloaded, skipping loading
I0906 18:30:02.893375 13823 kubeadm.go:934] updating node { 192.168.39.98 8443 v1.31.0 crio true true} ...
I0906 18:30:02.893497 13823 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-959832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.98
[Install]
config:
{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0906 18:30:02.893579 13823 ssh_runner.go:195] Run: crio config
I0906 18:30:02.943751 13823 cni.go:84] Creating CNI manager for ""
I0906 18:30:02.943774 13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0906 18:30:02.943794 13823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0906 18:30:02.943823 13823 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.98 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-959832 NodeName:addons-959832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0906 18:30:02.943970 13823 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.98
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-959832"
kubeletExtraArgs:
node-ip: 192.168.39.98
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.98"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
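Note that the config above still uses the deprecated kubeadm.k8s.io/v1beta3 API, which is what triggers the deprecation warnings kubeadm prints further down. To sanity-check or migrate a config like this by hand, kubeadm ships helpers for both; a sketch using the binary and file paths shown later in this log (not commands the test itself runs):
    # validate the uploaded config against kubeadm v1.31.0
    out/minikube-linux-amd64 -p addons-959832 ssh -- sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # preview the same config rewritten to the current API version
    out/minikube-linux-amd64 -p addons-959832 ssh -- sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml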
I0906 18:30:02.944029 13823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
I0906 18:30:02.953978 13823 binaries.go:44] Found k8s binaries, skipping transfer
I0906 18:30:02.954045 13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0906 18:30:02.963215 13823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0906 18:30:02.979953 13823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0906 18:30:02.996152 13823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
I0906 18:30:03.012715 13823 ssh_runner.go:195] Run: grep 192.168.39.98 control-plane.minikube.internal$ /etc/hosts
I0906 18:30:03.016576 13823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.98 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0906 18:30:03.028370 13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0906 18:30:03.151085 13823 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0906 18:30:03.168582 13823 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832 for IP: 192.168.39.98
I0906 18:30:03.168607 13823 certs.go:194] generating shared ca certs ...
I0906 18:30:03.168628 13823 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.168788 13823 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
I0906 18:30:03.299866 13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt ...
I0906 18:30:03.299897 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt: {Name:mke2b7c471d9f59e720011f7b10016af11ee9297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.300069 13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key ...
I0906 18:30:03.300084 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key: {Name:mkfac70472d4bba2ebe5c985be8bd475bcc6f548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.300181 13823 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
I0906 18:30:03.425280 13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt ...
I0906 18:30:03.425310 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt: {Name:mk08fa1d396d35f7ec100676e804094098a4d70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.425492 13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key ...
I0906 18:30:03.425520 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key: {Name:mk8fe87021c9d97780410b17544e3c226973cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.425623 13823 certs.go:256] generating profile certs ...
I0906 18:30:03.425675 13823 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key
I0906 18:30:03.425689 13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt with IP's: []
I0906 18:30:03.659418 13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt ...
I0906 18:30:03.659450 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: {Name:mk0f9c2f503201837abe2d4909970e9be7ff24f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.659616 13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key ...
I0906 18:30:03.659626 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key: {Name:mkdc65ba0a6775a2f0eae4f7b7974195d86c87d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.659695 13823 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e
I0906 18:30:03.659712 13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.98]
I0906 18:30:03.747012 13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e ...
I0906 18:30:03.747038 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e: {Name:mkac8ea9fd65a4ebd10dcac540165d914ce7db8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.747178 13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e ...
I0906 18:30:03.747192 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e: {Name:mk4a1ef0165a60b29c7ae52805cfb6305e8fcd01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.747259 13823 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt
I0906 18:30:03.747327 13823 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key
I0906 18:30:03.747377 13823 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key
I0906 18:30:03.747394 13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt with IP's: []
I0906 18:30:03.959127 13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt ...
I0906 18:30:03.959155 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt: {Name:mkde7bd5ab135e6d5e9a29c7a353c7a7ff8f667c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.959314 13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key ...
I0906 18:30:03.959329 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key: {Name:mkaff3d579d60be2767a53917ba5e3ae0b22c412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:03.959489 13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
I0906 18:30:03.959520 13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
I0906 18:30:03.959543 13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
I0906 18:30:03.959565 13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
I0906 18:30:03.960109 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0906 18:30:03.987472 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0906 18:30:04.010859 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0906 18:30:04.045335 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0906 18:30:04.069442 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0906 18:30:04.096260 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0906 18:30:04.121182 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0906 18:30:04.149817 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0906 18:30:04.173890 13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0906 18:30:04.197498 13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0906 18:30:04.216950 13823 ssh_runner.go:195] Run: openssl version
I0906 18:30:04.222654 13823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0906 18:30:04.233330 13823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:04.237701 13823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 6 18:30 /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:04.237760 13823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:04.243532 13823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0906 18:30:04.256013 13823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0906 18:30:04.260734 13823 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0906 18:30:04.260787 13823 kubeadm.go:392] StartCluster: {Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0906 18:30:04.260898 13823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0906 18:30:04.260952 13823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0906 18:30:04.303067 13823 cri.go:89] found id: ""
I0906 18:30:04.303126 13823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0906 18:30:04.313281 13823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0906 18:30:04.324983 13823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0906 18:30:04.335214 13823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0906 18:30:04.335235 13823 kubeadm.go:157] found existing configuration files:
I0906 18:30:04.335277 13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0906 18:30:04.344648 13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0906 18:30:04.344695 13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0906 18:30:04.354421 13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0906 18:30:04.363814 13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0906 18:30:04.363883 13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0906 18:30:04.373191 13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0906 18:30:04.382426 13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0906 18:30:04.382489 13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0906 18:30:04.392389 13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0906 18:30:04.402110 13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0906 18:30:04.402181 13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0906 18:30:04.411730 13823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0906 18:30:04.463645 13823 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
I0906 18:30:04.463694 13823 kubeadm.go:310] [preflight] Running pre-flight checks
I0906 18:30:04.559431 13823 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0906 18:30:04.559574 13823 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0906 18:30:04.559691 13823 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0906 18:30:04.568785 13823 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0906 18:30:04.633550 13823 out.go:235] - Generating certificates and keys ...
I0906 18:30:04.633656 13823 kubeadm.go:310] [certs] Using existing ca certificate authority
I0906 18:30:04.633738 13823 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0906 18:30:04.850232 13823 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0906 18:30:05.028833 13823 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0906 18:30:05.198669 13823 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0906 18:30:05.265171 13823 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0906 18:30:05.396138 13823 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0906 18:30:05.396314 13823 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-959832 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
I0906 18:30:05.615454 13823 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0906 18:30:05.615825 13823 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-959832 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
I0906 18:30:05.699300 13823 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0906 18:30:05.879000 13823 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0906 18:30:05.979662 13823 kubeadm.go:310] [certs] Generating "sa" key and public key
I0906 18:30:05.979866 13823 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0906 18:30:06.143465 13823 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0906 18:30:06.399160 13823 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0906 18:30:06.612959 13823 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0906 18:30:06.801192 13823 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0906 18:30:06.957635 13823 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0906 18:30:06.958075 13823 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0906 18:30:06.960513 13823 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0906 18:30:06.962637 13823 out.go:235] - Booting up control plane ...
I0906 18:30:06.962755 13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0906 18:30:06.962853 13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0906 18:30:06.962936 13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0906 18:30:06.982006 13823 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0906 18:30:06.987635 13823 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0906 18:30:06.987741 13823 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0906 18:30:07.107392 13823 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0906 18:30:07.107507 13823 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0906 18:30:07.608684 13823 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.950467ms
I0906 18:30:07.608794 13823 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0906 18:30:12.608494 13823 kubeadm.go:310] [api-check] The API server is healthy after 5.001776937s
I0906 18:30:12.627560 13823 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0906 18:30:12.653476 13823 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0906 18:30:12.689334 13823 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0906 18:30:12.689602 13823 kubeadm.go:310] [mark-control-plane] Marking the node addons-959832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0906 18:30:12.704990 13823 kubeadm.go:310] [bootstrap-token] Using token: ithoaf.u83bc4nltc0uwhpo
I0906 18:30:12.706456 13823 out.go:235] - Configuring RBAC rules ...
I0906 18:30:12.706574 13823 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0906 18:30:12.717372 13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0906 18:30:12.735384 13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0906 18:30:12.742188 13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0906 18:30:12.748903 13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0906 18:30:12.753193 13823 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0906 18:30:13.018036 13823 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0906 18:30:13.440120 13823 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0906 18:30:14.029827 13823 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0906 18:30:14.029853 13823 kubeadm.go:310]
I0906 18:30:14.029954 13823 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0906 18:30:14.029981 13823 kubeadm.go:310]
I0906 18:30:14.030093 13823 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0906 18:30:14.030104 13823 kubeadm.go:310]
I0906 18:30:14.030140 13823 kubeadm.go:310] mkdir -p $HOME/.kube
I0906 18:30:14.030226 13823 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0906 18:30:14.030309 13823 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0906 18:30:14.030318 13823 kubeadm.go:310]
I0906 18:30:14.030403 13823 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0906 18:30:14.030428 13823 kubeadm.go:310]
I0906 18:30:14.030488 13823 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0906 18:30:14.030498 13823 kubeadm.go:310]
I0906 18:30:14.030561 13823 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0906 18:30:14.030660 13823 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0906 18:30:14.030776 13823 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0906 18:30:14.030796 13823 kubeadm.go:310]
I0906 18:30:14.030915 13823 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0906 18:30:14.031015 13823 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0906 18:30:14.031028 13823 kubeadm.go:310]
I0906 18:30:14.031132 13823 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ithoaf.u83bc4nltc0uwhpo \
I0906 18:30:14.031273 13823 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
I0906 18:30:14.031306 13823 kubeadm.go:310] --control-plane
I0906 18:30:14.031316 13823 kubeadm.go:310]
I0906 18:30:14.031450 13823 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0906 18:30:14.031472 13823 kubeadm.go:310]
I0906 18:30:14.031592 13823 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ithoaf.u83bc4nltc0uwhpo \
I0906 18:30:14.031750 13823 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80
I0906 18:30:14.032620 13823 kubeadm.go:310] W0906 18:30:04.444733 823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0906 18:30:14.033044 13823 kubeadm.go:310] W0906 18:30:04.446560 823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0906 18:30:14.033225 13823 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
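kubeadm has now reported the control plane as initialized; everything after this point is minikube wiring up CNI, RBAC and the requested addons. If you are following along and want to confirm the cluster is reachable before the addon steps, the usual checks are (illustrative, using the profile name from this log):
    kubectl --context addons-959832 get nodes -o wide
    kubectl --context addons-959832 -n kube-system get pods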
I0906 18:30:14.033247 13823 cni.go:84] Creating CNI manager for ""
I0906 18:30:14.033257 13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0906 18:30:14.035685 13823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0906 18:30:14.037043 13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0906 18:30:14.051040 13823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0906 18:30:14.080330 13823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0906 18:30:14.080403 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:14.080418 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-959832 minikube.k8s.io/updated_at=2024_09_06T18_30_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-959832 minikube.k8s.io/primary=true
I0906 18:30:14.123199 13823 ops.go:34] apiserver oom_adj: -16
I0906 18:30:14.247505 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:14.748250 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:15.248440 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:15.747562 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:16.247913 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:16.747636 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:17.248181 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:17.748128 13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:17.838400 13823 kubeadm.go:1113] duration metric: took 3.758062138s to wait for elevateKubeSystemPrivileges
I0906 18:30:17.838441 13823 kubeadm.go:394] duration metric: took 13.577657427s to StartCluster
I0906 18:30:17.838464 13823 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:17.838613 13823 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19576-6021/kubeconfig
I0906 18:30:17.839096 13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:17.839337 13823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0906 18:30:17.839344 13823 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
I0906 18:30:17.839425 13823 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0906 18:30:17.839549 13823 addons.go:69] Setting yakd=true in profile "addons-959832"
I0906 18:30:17.839564 13823 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-959832"
I0906 18:30:17.839564 13823 addons.go:69] Setting helm-tiller=true in profile "addons-959832"
I0906 18:30:17.839600 13823 addons.go:69] Setting storage-provisioner=true in profile "addons-959832"
I0906 18:30:17.839601 13823 addons.go:69] Setting inspektor-gadget=true in profile "addons-959832"
I0906 18:30:17.839616 13823 addons.go:234] Setting addon storage-provisioner=true in "addons-959832"
I0906 18:30:17.839621 13823 addons.go:234] Setting addon inspektor-gadget=true in "addons-959832"
I0906 18:30:17.839625 13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:30:17.839635 13823 addons.go:234] Setting addon helm-tiller=true in "addons-959832"
I0906 18:30:17.839624 13823 addons.go:69] Setting ingress-dns=true in profile "addons-959832"
I0906 18:30:17.839656 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.839680 13823 addons.go:234] Setting addon ingress-dns=true in "addons-959832"
I0906 18:30:17.839708 13823 addons.go:69] Setting metrics-server=true in profile "addons-959832"
I0906 18:30:17.839721 13823 addons.go:69] Setting gcp-auth=true in profile "addons-959832"
I0906 18:30:17.839706 13823 addons.go:69] Setting ingress=true in profile "addons-959832"
I0906 18:30:17.839737 13823 addons.go:234] Setting addon metrics-server=true in "addons-959832"
I0906 18:30:17.839738 13823 mustload.go:65] Loading cluster: addons-959832
I0906 18:30:17.839744 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.839683 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.839951 13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:30:17.840149 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.840201 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.840215 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.840233 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.839763 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.840319 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.840341 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.840156 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.839590 13823 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-959832"
I0906 18:30:17.840465 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.840490 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.839591 13823 addons.go:69] Setting registry=true in profile "addons-959832"
I0906 18:30:17.840596 13823 addons.go:234] Setting addon registry=true in "addons-959832"
I0906 18:30:17.840637 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.840665 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.840688 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.841280 13823 out.go:177] * Verifying Kubernetes components...
I0906 18:30:17.839582 13823 addons.go:234] Setting addon yakd=true in "addons-959832"
I0906 18:30:17.841416 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.839685 13823 addons.go:69] Setting volcano=true in profile "addons-959832"
I0906 18:30:17.841566 13823 addons.go:234] Setting addon volcano=true in "addons-959832"
I0906 18:30:17.839689 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.841626 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.841783 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.841812 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.841859 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.839695 13823 addons.go:69] Setting cloud-spanner=true in profile "addons-959832"
I0906 18:30:17.841931 13823 addons.go:234] Setting addon cloud-spanner=true in "addons-959832"
I0906 18:30:17.841963 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.841970 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.841989 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.841816 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.842303 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.842321 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.842543 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.842595 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.839696 13823 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-959832"
I0906 18:30:17.842884 13823 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-959832"
I0906 18:30:17.839699 13823 addons.go:69] Setting volumesnapshots=true in profile "addons-959832"
I0906 18:30:17.839713 13823 addons.go:69] Setting default-storageclass=true in profile "addons-959832"
I0906 18:30:17.839762 13823 addons.go:234] Setting addon ingress=true in "addons-959832"
I0906 18:30:17.842835 13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0906 18:30:17.839705 13823 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-959832"
I0906 18:30:17.843210 13823 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-959832"
I0906 18:30:17.843351 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.843531 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.843563 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.843835 13823 addons.go:234] Setting addon volumesnapshots=true in "addons-959832"
I0906 18:30:17.843857 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.844006 13823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-959832"
I0906 18:30:17.844352 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.844369 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.853075 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.861521 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
I0906 18:30:17.862212 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.862927 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.862953 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.863254 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
I0906 18:30:17.863342 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.863358 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
I0906 18:30:17.864034 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.864195 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.864234 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.864508 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.864529 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.864924 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.868974 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.869351 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.869398 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.869553 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.869575 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.879527 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
I0906 18:30:17.879542 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
I0906 18:30:17.879654 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
I0906 18:30:17.879684 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
I0906 18:30:17.879760 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.881648 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.885011 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.885160 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
I0906 18:30:17.885420 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.885459 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.885971 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.886011 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.886343 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.886375 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.886602 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.886665 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.886686 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.886716 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.886809 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
I0906 18:30:17.886904 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
I0906 18:30:17.887101 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.887199 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.887215 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.887238 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.887599 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.888208 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.888371 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.888383 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.888541 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.888561 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.888566 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.888701 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.888711 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.888743 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.888754 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.888780 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.889687 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.889730 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.889761 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.889889 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.889901 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.889943 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.889978 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.890062 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.890069 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.890553 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.890607 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.891323 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.891899 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.891930 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.892658 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.892934 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.893002 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.893143 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.893184 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.893806 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.893854 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.894913 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.894960 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.895352 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.895805 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.895847 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.897573 13823 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0906 18:30:17.899434 13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0906 18:30:17.899459 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0906 18:30:17.899481 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.903071 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.903469 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.903516 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.903739 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.903926 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.904048 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.904161 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.911366 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
I0906 18:30:17.912019 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.912706 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.912741 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.913185 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.913911 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.913970 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.916304 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
I0906 18:30:17.916921 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.917609 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.917631 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.918094 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.918809 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.918849 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.920068 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34889
I0906 18:30:17.920527 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.921055 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.921080 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.921442 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.921621 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.923561 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.924047 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
I0906 18:30:17.924598 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.925400 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.925427 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.925816 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.925833 13823 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0906 18:30:17.926025 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.927332 13823 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0906 18:30:17.927362 13823 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0906 18:30:17.927413 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.928541 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.931169 13823 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0906 18:30:17.932027 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.932560 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.932588 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.932970 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
I0906 18:30:17.933032 13823 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0906 18:30:17.933049 13823 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0906 18:30:17.933073 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.933158 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.933325 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.933426 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.933566 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.934213 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.934915 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.934933 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.935404 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.935557 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
I0906 18:30:17.935722 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.936009 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.936810 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
I0906 18:30:17.937524 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.938126 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.938143 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.938211 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.938388 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.938402 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.938499 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.938891 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.938931 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.938946 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.938969 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.939155 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.939625 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.939703 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.939744 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.939784 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.939923 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.940763 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.941678 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.943308 13823 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0906 18:30:17.943311 13823 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0906 18:30:17.944079 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
I0906 18:30:17.944771 13823 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0906 18:30:17.944801 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0906 18:30:17.944819 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.944775 13823 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0906 18:30:17.944907 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0906 18:30:17.944920 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.948201 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.948657 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.948689 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.948842 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.949234 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.949990 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.950029 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.950282 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.950943 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42051
I0906 18:30:17.950969 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.950989 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.951044 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
I0906 18:30:17.951238 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.951466 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.951515 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.951465 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.952056 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.952066 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.952073 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.952082 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.952138 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.952155 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.952344 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.952631 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.952687 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.952826 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.952846 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.953106 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.953314 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.953375 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
I0906 18:30:17.953914 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.953936 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.954109 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.954862 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
I0906 18:30:17.955016 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.955377 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.955393 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.955452 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.955793 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.955962 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.955973 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
I0906 18:30:17.956660 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.956816 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.956830 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.957324 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.957345 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.957414 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.957813 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.957859 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.958442 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.958480 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.959016 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.960122 13823 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-959832"
I0906 18:30:17.960157 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.960504 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.960508 13823 addons.go:234] Setting addon default-storageclass=true in "addons-959832"
I0906 18:30:17.960533 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.960553 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:17.960773 13823 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0906 18:30:17.960927 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.960957 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.961028 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
I0906 18:30:17.963299 13823 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
I0906 18:30:17.963616 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.964149 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.964171 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.964676 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.964848 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.965817 13823 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0906 18:30:17.966420 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.967088 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
I0906 18:30:17.967322 13823 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0906 18:30:17.967345 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0906 18:30:17.967363 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.967560 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.968670 13823 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0906 18:30:17.969763 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.969781 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.970095 13823 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0906 18:30:17.970112 13823 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0906 18:30:17.970131 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.970337 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
I0906 18:30:17.970743 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.971382 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.971385 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.971412 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.972059 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.972078 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.972319 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.972519 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.972712 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.972912 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.973203 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.974390 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.974410 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.975147 13823 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0906 18:30:17.975803 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.976343 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.976370 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.976539 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.976705 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.976816 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.976940 13823 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0906 18:30:17.976955 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0906 18:30:17.976970 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.977663 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.978180 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.978553 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.980971 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.981520 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.981539 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.981727 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.981897 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.982079 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.982239 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.983455 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
I0906 18:30:17.983619 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
I0906 18:30:17.984075 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.984656 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.984672 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.984763 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
I0906 18:30:17.984898 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.985019 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.985969 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.985992 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.986044 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.986161 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.986175 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.986855 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.986875 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.987256 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.987509 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.988050 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.988397 13823 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0906 18:30:17.988950 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
I0906 18:30:17.989105 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.989288 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.989355 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.989528 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.989938 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.989956 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.990021 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:17.990028 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:17.990027 13823 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0906 18:30:17.990240 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:17.990252 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:17.990260 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:17.990268 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:17.990348 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.990523 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:17.990554 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:17.990563 13823 main.go:141] libmachine: Making call to close connection to plugin binary
W0906 18:30:17.990634 13823 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I0906 18:30:17.990673 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:17.990882 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.991485 13823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0906 18:30:17.991505 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0906 18:30:17.991523 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.992446 13823 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0906 18:30:17.992494 13823 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0906 18:30:17.992990 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
I0906 18:30:17.993671 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:17.994204 13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0906 18:30:17.994221 13823 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0906 18:30:17.994276 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:17.994304 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:17.994314 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:17.994319 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:17.994675 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.994705 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:17.995095 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.995127 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.995287 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.995320 13823 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0906 18:30:17.995468 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.995609 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:17.995687 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:17.995715 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:17.995789 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:17.996063 13823 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0906 18:30:17.997430 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.997701 13823 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0906 18:30:17.997900 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:17.997927 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:17.998085 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:17.998251 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:17.999429 13823 out.go:177] - Using image docker.io/registry:2.8.3
I0906 18:30:18.000423 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33437
I0906 18:30:18.000443 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:18.000610 13823 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0906 18:30:18.000700 13823 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0906 18:30:18.000713 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0906 18:30:18.000733 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:18.000992 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:18.001111 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:18.001653 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:18.001671 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:18.002038 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:18.002683 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:18.002727 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:18.003368 13823 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0906 18:30:18.003618 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:18.003952 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:18.003970 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:18.004139 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:18.004273 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:18.004359 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:18.004434 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:18.005728 13823 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0906 18:30:18.006862 13823 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0906 18:30:18.007852 13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0906 18:30:18.007870 13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0906 18:30:18.007888 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:18.010752 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:18.011133 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:18.011162 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:18.011278 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:18.011435 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:18.011556 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:18.011677 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:18.019869 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
I0906 18:30:18.025324 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:18.025853 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:18.025867 13823 main.go:141] libmachine: () Calling .SetConfigRaw
W0906 18:30:18.026199 13823 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37452->192.168.39.98:22: read: connection reset by peer
I0906 18:30:18.026228 13823 retry.go:31] will retry after 165.921545ms: ssh: handshake failed: read tcp 192.168.39.1:37452->192.168.39.98:22: read: connection reset by peer
I0906 18:30:18.026287 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:18.026483 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:18.028221 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:18.028440 13823 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0906 18:30:18.028451 13823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0906 18:30:18.028463 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:18.030594 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:18.030951 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:18.030970 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:18.031122 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:18.031278 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:18.031416 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:18.031526 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:18.046424 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
I0906 18:30:18.046881 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:18.047847 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:18.047876 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:18.048219 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:18.048439 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:18.050153 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:18.052332 13823 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0906 18:30:18.054123 13823 out.go:177] - Using image docker.io/busybox:stable
I0906 18:30:18.055683 13823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0906 18:30:18.055715 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0906 18:30:18.055735 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:18.058890 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:18.059267 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:18.059308 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:18.059467 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:18.059660 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:18.059835 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:18.059965 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:18.325758 13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0906 18:30:18.325780 13823 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0906 18:30:18.462745 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0906 18:30:18.498367 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0906 18:30:18.542161 13823 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0906 18:30:18.542189 13823 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0906 18:30:18.544357 13823 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0906 18:30:18.544383 13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0906 18:30:18.562318 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0906 18:30:18.591769 13823 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0906 18:30:18.592321 13823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0906 18:30:18.615892 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0906 18:30:18.619170 13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0906 18:30:18.619198 13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0906 18:30:18.623393 13823 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0906 18:30:18.623412 13823 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0906 18:30:18.632558 13823 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0906 18:30:18.632587 13823 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0906 18:30:18.642554 13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0906 18:30:18.642577 13823 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0906 18:30:18.646434 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0906 18:30:18.712949 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0906 18:30:18.744354 13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0906 18:30:18.744376 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0906 18:30:18.745893 13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0906 18:30:18.745909 13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0906 18:30:18.758057 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0906 18:30:18.794329 13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0906 18:30:18.794351 13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0906 18:30:18.810523 13823 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0906 18:30:18.810541 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0906 18:30:18.819725 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0906 18:30:18.820412 13823 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0906 18:30:18.820430 13823 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0906 18:30:18.870635 13823 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0906 18:30:18.870657 13823 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0906 18:30:18.955167 13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0906 18:30:18.955193 13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0906 18:30:19.024347 13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0906 18:30:19.024371 13823 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0906 18:30:19.036090 13823 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0906 18:30:19.036117 13823 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0906 18:30:19.061575 13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0906 18:30:19.061599 13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0906 18:30:19.063347 13823 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0906 18:30:19.063362 13823 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0906 18:30:19.071318 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0906 18:30:19.185778 13823 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0906 18:30:19.185801 13823 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0906 18:30:19.198921 13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0906 18:30:19.198940 13823 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0906 18:30:19.225401 13823 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0906 18:30:19.225422 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0906 18:30:19.250965 13823 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0906 18:30:19.250991 13823 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0906 18:30:19.295032 13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0906 18:30:19.295064 13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0906 18:30:19.560881 13823 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0906 18:30:19.560903 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0906 18:30:19.605732 13823 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0906 18:30:19.605761 13823 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0906 18:30:19.605857 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0906 18:30:19.639600 13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0906 18:30:19.639626 13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0906 18:30:19.651766 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0906 18:30:19.815029 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0906 18:30:19.831850 13823 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0906 18:30:19.831883 13823 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0906 18:30:19.953978 13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0906 18:30:19.953997 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0906 18:30:20.091151 13823 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0906 18:30:20.091171 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0906 18:30:20.208365 13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0906 18:30:20.208395 13823 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0906 18:30:20.322907 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0906 18:30:20.592180 13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0906 18:30:20.592203 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0906 18:30:20.866215 13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0906 18:30:20.866237 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0906 18:30:21.296320 13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0906 18:30:21.296345 13823 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0906 18:30:21.533570 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0906 18:30:23.237459 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.774672195s)
I0906 18:30:23.237524 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.237547 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.237911 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:23.237986 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.238006 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.238024 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.238036 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.238294 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.238313 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.751842 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.253438201s)
I0906 18:30:23.751900 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.751914 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.751912 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.18956267s)
I0906 18:30:23.751952 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.751967 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.752014 13823 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.160216467s)
I0906 18:30:23.752042 13823 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.159701916s)
I0906 18:30:23.752057 13823 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
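For context, a minimal sketch of what the sed pipeline above injects into the CoreDNS Corefile (reconstructed from the command itself, not captured from the cluster); this hosts block is what makes host.minikube.internal resolve to 192.168.39.1 from inside the cluster:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }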
I0906 18:30:23.752091 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.136171256s)
I0906 18:30:23.752131 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.752144 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.752372 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.752387 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.752396 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.752402 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.752419 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.752432 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.752442 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.752445 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:23.752450 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.752518 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:23.752555 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.752587 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.752603 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.752619 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.752674 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:23.752715 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:23.752737 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.752746 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.753079 13823 node_ready.go:35] waiting up to 6m0s for node "addons-959832" to be "Ready" ...
I0906 18:30:23.753223 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.753238 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.753335 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:23.753364 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.753380 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.817790 13823 node_ready.go:49] node "addons-959832" has status "Ready":"True"
I0906 18:30:23.817814 13823 node_ready.go:38] duration metric: took 64.714897ms for node "addons-959832" to be "Ready" ...
I0906 18:30:23.817823 13823 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0906 18:30:23.864694 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.864718 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.864768 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:23.864803 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:23.865089 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.865109 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:23.865155 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:23.865189 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:23.865203 13823 main.go:141] libmachine: Making call to close connection to plugin binary
W0906 18:30:23.865293 13823 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
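The warning above is an optimistic-concurrency conflict while marking the local-path StorageClass as the default. As a rough equivalent (a sketch, not the exact call minikube makes), making a StorageClass the default comes down to setting the standard annotation and retrying on conflict:

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'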
I0906 18:30:23.895688 13823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace to be "Ready" ...
I0906 18:30:24.386851 13823 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-959832" context rescaled to 1 replicas
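The rescale above corresponds, roughly, to scaling the coredns deployment down to one replica (a sketch of the equivalent kubectl operation, not the client call the tool actually makes):

    kubectl --context addons-959832 -n kube-system scale deployment coredns --replicas=1

This is consistent with the coredns pod that is skipped later at 18:30:32 because its phase has moved to Succeeded.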
I0906 18:30:24.986957 13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0906 18:30:24.987010 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:24.990148 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:24.990559 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:24.990592 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:24.990724 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:24.990958 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:24.991131 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:24.991298 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:25.501366 13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0906 18:30:25.593869 13823 addons.go:234] Setting addon gcp-auth=true in "addons-959832"
I0906 18:30:25.593929 13823 host.go:66] Checking if "addons-959832" exists ...
I0906 18:30:25.594221 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:25.594261 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:25.609081 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
I0906 18:30:25.609512 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:25.609995 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:25.610010 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:25.610361 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:25.610997 13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:30:25.611034 13823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:30:25.625831 13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
I0906 18:30:25.626278 13823 main.go:141] libmachine: () Calling .GetVersion
I0906 18:30:25.626760 13823 main.go:141] libmachine: Using API Version 1
I0906 18:30:25.626788 13823 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:30:25.627170 13823 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:30:25.627386 13823 main.go:141] libmachine: (addons-959832) Calling .GetState
I0906 18:30:25.629014 13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
I0906 18:30:25.629236 13823 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0906 18:30:25.629259 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
I0906 18:30:25.631653 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:25.632049 13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
I0906 18:30:25.632077 13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
I0906 18:30:25.632216 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
I0906 18:30:25.632399 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
I0906 18:30:25.632555 13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
I0906 18:30:25.632700 13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
I0906 18:30:25.941079 13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:27.481753 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.835292795s)
I0906 18:30:27.481764 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.768781047s)
I0906 18:30:27.481804 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.481809 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.723718351s)
I0906 18:30:27.481827 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.481815 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.481841 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.481846 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.481854 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.481864 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.662110283s)
I0906 18:30:27.481888 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.481903 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.481917 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.410575966s)
I0906 18:30:27.481932 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.481941 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.481953 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.876072516s)
I0906 18:30:27.481973 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.481985 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.482084 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.830290669s)
I0906 18:30:27.482101 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.482111 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.482256 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.667196336s)
I0906 18:30:27.482281 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
W0906 18:30:27.482296 13823 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0906 18:30:27.482317 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.482323 13823 retry.go:31] will retry after 254.362145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
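The failure above is a CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, before the API server has registered the new kind, hence "ensure CRDs are installed first", the retry, and the later re-apply with --force. One way to avoid the race in a manual reproduction (a sketch, not what the addon manager does) is to wait for the CRD to be established before applying the class:

    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io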
I0906 18:30:27.482304 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.482348 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.482355 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.482362 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.482365 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.482369 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.482372 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.482374 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.482381 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.482386 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.482391 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.482395 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.482402 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.482411 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.482419 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.482426 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.482399 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.159419479s)
I0906 18:30:27.482444 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.482451 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.482456 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.482461 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.482466 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.482475 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.482891 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.482928 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.482936 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.482392 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.482433 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.484341 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.484358 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.484374 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.484397 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.484405 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.484413 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.484420 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.484462 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.484469 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.484477 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.484484 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.485863 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.485876 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.485887 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.485896 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.485904 13823 addons.go:475] Verifying addon metrics-server=true in "addons-959832"
I0906 18:30:27.485927 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.485930 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.485938 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.485943 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.485946 13823 addons.go:475] Verifying addon ingress=true in "addons-959832"
I0906 18:30:27.485950 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.485997 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.486046 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.486077 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.486084 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.485864 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.486513 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.486554 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.486562 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.487477 13823 out.go:177] * Verifying ingress addon...
I0906 18:30:27.487573 13823 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-959832 service yakd-dashboard -n yakd-dashboard
I0906 18:30:27.486024 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.487691 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.487717 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:27.487728 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:27.487937 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:27.487952 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:27.487960 13823 addons.go:475] Verifying addon registry=true in "addons-959832"
I0906 18:30:27.487962 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:27.489109 13823 out.go:177] * Verifying registry addon...
I0906 18:30:27.490025 13823 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0906 18:30:27.490703 13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0906 18:30:27.494994 13823 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0906 18:30:27.495014 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:27.495422 13823 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0906 18:30:27.495442 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:27.737115 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0906 18:30:27.995783 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:27.996316 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:28.405776 13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:28.525889 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:28.526140 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:29.000232 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:29.000400 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:29.288925 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.755298783s)
I0906 18:30:29.288949 13823 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.659689548s)
I0906 18:30:29.288969 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:29.288980 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:29.289345 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:29.289363 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:29.289373 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:29.289381 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:29.289348 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:29.289643 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:29.289659 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:29.289670 13823 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-959832"
I0906 18:30:29.290527 13823 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0906 18:30:29.291464 13823 out.go:177] * Verifying csi-hostpath-driver addon...
I0906 18:30:29.293133 13823 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0906 18:30:29.293804 13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0906 18:30:29.294483 13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0906 18:30:29.294501 13823 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0906 18:30:29.307557 13823 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0906 18:30:29.307575 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:29.501347 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:29.502636 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:29.549399 13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0906 18:30:29.549424 13823 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0906 18:30:29.631326 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.894156301s)
I0906 18:30:29.631395 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:29.631409 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:29.631783 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:29.631805 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:29.631809 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:29.631815 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:29.631831 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:29.632053 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:29.632067 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:29.711353 13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0906 18:30:29.711373 13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0906 18:30:29.758533 13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0906 18:30:29.798367 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:29.994829 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:29.995464 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:30.298814 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:30.494755 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:30.495217 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:30.800377 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:30.927844 13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:31.011246 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:31.011996 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:31.259074 13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.500495277s)
I0906 18:30:31.259136 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:31.259150 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:31.259463 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:31.259567 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:31.259547 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:31.259579 13823 main.go:141] libmachine: Making call to close driver server
I0906 18:30:31.259614 13823 main.go:141] libmachine: (addons-959832) Calling .Close
I0906 18:30:31.259913 13823 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:30:31.259930 13823 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:30:31.259955 13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
I0906 18:30:31.261909 13823 addons.go:475] Verifying addon gcp-auth=true in "addons-959832"
I0906 18:30:31.263787 13823 out.go:177] * Verifying gcp-auth addon...
I0906 18:30:31.265893 13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0906 18:30:31.298469 13823 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0906 18:30:31.298489 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:31.300480 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:31.497017 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:31.497257 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:31.769388 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:31.798048 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:31.995495 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:31.995656 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:32.269836 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:32.298842 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:32.495206 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:32.496478 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:32.769455 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:32.798535 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:32.905084   13823 pod_ready.go:98] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.98 HostIPs:[{IP:192.168.39.98}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-06 18:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:23 +0000 UTC,FinishedAt:2024-09-06 18:30:30 +0000 UTC,ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2 Started:0xc0020651d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000b9f530} {Name:kube-api-access-fjvjc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000b9f540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0906 18:30:32.905113 13823 pod_ready.go:82] duration metric: took 9.009398679s for pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace to be "Ready" ...
E0906 18:30:32.905127   13823 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.98 HostIPs:[{IP:192.168.39.98}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-06 18:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:23 +0000 UTC,FinishedAt:2024-09-06 18:30:30 +0000 UTC,ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2 Started:0xc0020651d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000b9f530} {Name:kube-api-access-fjvjc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000b9f540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0906 18:30:32.905141 13823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.911075 13823 pod_ready.go:93] pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:32.911105 13823 pod_ready.go:82] duration metric: took 5.954486ms for pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.911119 13823 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-959832" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.928213 13823 pod_ready.go:93] pod "etcd-addons-959832" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:32.928234 13823 pod_ready.go:82] duration metric: took 17.107089ms for pod "etcd-addons-959832" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.928244 13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-959832" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.942443 13823 pod_ready.go:93] pod "kube-apiserver-addons-959832" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:32.942474 13823 pod_ready.go:82] duration metric: took 14.222157ms for pod "kube-apiserver-addons-959832" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.942489 13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-959832" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.948544 13823 pod_ready.go:93] pod "kube-controller-manager-addons-959832" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:32.948568 13823 pod_ready.go:82] duration metric: took 6.069443ms for pod "kube-controller-manager-addons-959832" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.948594 13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-df5wg" in "kube-system" namespace to be "Ready" ...
I0906 18:30:32.995554 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:32.996027 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:33.270077 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:33.300133 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:33.300322 13823 pod_ready.go:93] pod "kube-proxy-df5wg" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:33.300343 13823 pod_ready.go:82] duration metric: took 351.740369ms for pod "kube-proxy-df5wg" in "kube-system" namespace to be "Ready" ...
I0906 18:30:33.300356 13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-959832" in "kube-system" namespace to be "Ready" ...
I0906 18:30:33.494781 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:33.495847 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:33.701424 13823 pod_ready.go:93] pod "kube-scheduler-addons-959832" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:33.701467 13823 pod_ready.go:82] duration metric: took 401.098684ms for pod "kube-scheduler-addons-959832" in "kube-system" namespace to be "Ready" ...
I0906 18:30:33.701495 13823 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace to be "Ready" ...
I0906 18:30:33.769360 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:33.798021 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:33.995683 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:33.997103 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:34.270015 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:34.299221 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:34.495406 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:34.496126 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:34.770094 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:34.799237 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:34.996508 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:34.997585 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:35.270568 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:35.299394 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:35.495141 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:35.495320 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:35.707531 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:35.770986 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:35.800293 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:35.996725 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:35.997639 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:36.270981 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:36.303214 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:36.494976 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:36.496783 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:36.771081 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:36.799874 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:36.995676 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:36.996010 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:37.270120 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:37.299046 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:37.494705 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:37.496067 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:37.707603 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:37.769678 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:37.798583 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:37.995037 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:37.995885 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:38.269217 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:38.298643 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:38.495448 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:38.495856 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:38.769730 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:38.799711 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:39.083640 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:39.083787 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:39.496519 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:39.496908 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:39.497701 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:39.499783 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:39.769883 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:39.798544 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:39.994338 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:39.995398 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:40.209006 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:40.272568 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:40.301397 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:40.498136 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:40.498526 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:40.770814 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:40.798522 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:40.994052 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:40.995394 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:41.270657 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:41.298770 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:41.498318 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:41.498596 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:41.770854 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:41.799666 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:41.995027 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:41.995612 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:42.270017 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:42.299094 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:42.592984 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:42.595535 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:42.721960 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:42.772381 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:42.799751 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:42.995172 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:42.995508 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:43.272873 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:43.298467 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:43.494939 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:43.495402 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:43.769785 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:43.798713 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:43.996443 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:43.996744 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:44.269175 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:44.308002 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:44.494478 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:44.494986 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:44.770210 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:44.797768 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:44.995782 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:44.997472 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:45.207350 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:45.269487 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:45.298388 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:45.494409 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:45.494479 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:45.769970 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:45.798375 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:45.995583 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:45.995736 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:46.269632 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:46.299154 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:46.495331 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:46.495578 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:46.769857 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:46.799172 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:46.995967 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:46.996352 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:47.207412 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:47.270222 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:47.300058 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:47.501228 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:47.501496 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:47.769887 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:47.798711 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:47.994453 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:47.994618 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:48.270499 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:48.298587 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:48.494874 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:48.494941 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:48.771487 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:48.799341 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:48.995078 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:48.995997 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:49.270055 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:49.297759 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:49.493704 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:49.496397 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:49.707766 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:49.769942 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:49.799020 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:49.994521 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:49.995871 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:50.269405 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:50.298442 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:50.495620 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:50.496486 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:50.876382 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:50.877156 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:50.996700 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:50.996938 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:51.269377 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:51.298953 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:51.495015 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:51.495481 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:51.708764 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:51.770620 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:51.798067 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:51.994702 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:51.995528 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:52.269440 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:52.298688 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:52.496129 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:52.497284 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:52.769844 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:52.799404 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:52.995549 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:52.995828 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:53.272511 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:53.299182 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:53.495690 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:53.498212 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:53.769884 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:53.799759 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:53.994840 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:53.994970 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:54.208168 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:54.270994 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:54.301366 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:54.494638 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:54.495314 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:54.769283 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:54.797866 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:55.272696 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:55.272743 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:55.272998 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:55.298147 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:55.495547 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:55.495711 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:55.770496 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:55.802302 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:55.995386 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:55.995623 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:56.268801 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:56.298461 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:56.494963 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:56.495882 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:57.291534 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:57.291868 13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:57.292073 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:57.292099 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:57.293348 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:57.309051 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:57.309858 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:57.312884 13823 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:57.312900 13823 pod_ready.go:82] duration metric: took 23.611395425s for pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace to be "Ready" ...
I0906 18:30:57.312922 13823 pod_ready.go:39] duration metric: took 33.495084445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0906 18:30:57.312943 13823 api_server.go:52] waiting for apiserver process to appear ...
I0906 18:30:57.312998 13823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:57.342569 13823 api_server.go:72] duration metric: took 39.503199537s to wait for apiserver process to appear ...
I0906 18:30:57.342597 13823 api_server.go:88] waiting for apiserver healthz status ...
I0906 18:30:57.342618 13823 api_server.go:253] Checking apiserver healthz at https://192.168.39.98:8443/healthz ...
I0906 18:30:57.347032 13823 api_server.go:279] https://192.168.39.98:8443/healthz returned 200:
ok
I0906 18:30:57.348263 13823 api_server.go:141] control plane version: v1.31.0
I0906 18:30:57.348287 13823 api_server.go:131] duration metric: took 5.682402ms to wait for apiserver health ...
I0906 18:30:57.348297 13823 system_pods.go:43] waiting for kube-system pods to appear ...
I0906 18:30:57.359723 13823 system_pods.go:59] 18 kube-system pods found
I0906 18:30:57.359757 13823 system_pods.go:61] "coredns-6f6b679f8f-d5d26" [8f56a285-a4a2-42b2-b904-86d4b92e1593] Running
I0906 18:30:57.359769 13823 system_pods.go:61] "csi-hostpath-attacher-0" [077a752a-2398-4e94-b907-d0888261774c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0906 18:30:57.359778 13823 system_pods.go:61] "csi-hostpath-resizer-0" [4d49487b-d00b-4ee7-8007-fc440aad009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0906 18:30:57.359790 13823 system_pods.go:61] "csi-hostpathplugin-j7df9" [146029b8-76c4-479b-8217-00a90921e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0906 18:30:57.359800 13823 system_pods.go:61] "etcd-addons-959832" [2517086a-0030-456f-a07a-8973652d205c] Running
I0906 18:30:57.359806 13823 system_pods.go:61] "kube-apiserver-addons-959832" [c93b4ce0-62b0-4e1f-9a98-76b6e7ad4fbc] Running
I0906 18:30:57.359815 13823 system_pods.go:61] "kube-controller-manager-addons-959832" [3dc3e2e0-cdf7-4d83-8d8e-5cc86d87c45b] Running
I0906 18:30:57.359820 13823 system_pods.go:61] "kube-ingress-dns-minikube" [1673a19c-a4a9-4d9d-bda1-e073fb44b3d8] Running
I0906 18:30:57.359826 13823 system_pods.go:61] "kube-proxy-df5wg" [f92f8a67-fa25-410a-b7f6-928c602e53e5] Running
I0906 18:30:57.359829 13823 system_pods.go:61] "kube-scheduler-addons-959832" [0a2458fe-333d-4ca7-b2ab-c58159f3a491] Running
I0906 18:30:57.359834 13823 system_pods.go:61] "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0906 18:30:57.359840 13823 system_pods.go:61] "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
I0906 18:30:57.359846 13823 system_pods.go:61] "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0906 18:30:57.359852 13823 system_pods.go:61] "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0906 18:30:57.359858 13823 system_pods.go:61] "snapshot-controller-56fcc65765-db2j5" [afcb8d14-41d7-444b-b16d-496ca520ee39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0906 18:30:57.359867 13823 system_pods.go:61] "snapshot-controller-56fcc65765-jjdrv" [d3df181f-bfa3-4ef4-9767-ecc84c335cc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0906 18:30:57.359871 13823 system_pods.go:61] "storage-provisioner" [a837ebf7-7140-4baa-8b93-ea556996b204] Running
I0906 18:30:57.359877 13823 system_pods.go:61] "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
I0906 18:30:57.359885 13823 system_pods.go:74] duration metric: took 11.581782ms to wait for pod list to return data ...
I0906 18:30:57.359894 13823 default_sa.go:34] waiting for default service account to be created ...
I0906 18:30:57.364154 13823 default_sa.go:45] found service account: "default"
I0906 18:30:57.364173 13823 default_sa.go:55] duration metric: took 4.273217ms for default service account to be created ...
I0906 18:30:57.364181 13823 system_pods.go:116] waiting for k8s-apps to be running ...
I0906 18:30:57.373118 13823 system_pods.go:86] 18 kube-system pods found
I0906 18:30:57.373150 13823 system_pods.go:89] "coredns-6f6b679f8f-d5d26" [8f56a285-a4a2-42b2-b904-86d4b92e1593] Running
I0906 18:30:57.373165 13823 system_pods.go:89] "csi-hostpath-attacher-0" [077a752a-2398-4e94-b907-d0888261774c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0906 18:30:57.373175 13823 system_pods.go:89] "csi-hostpath-resizer-0" [4d49487b-d00b-4ee7-8007-fc440aad009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0906 18:30:57.373194 13823 system_pods.go:89] "csi-hostpathplugin-j7df9" [146029b8-76c4-479b-8217-00a90921e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0906 18:30:57.373202 13823 system_pods.go:89] "etcd-addons-959832" [2517086a-0030-456f-a07a-8973652d205c] Running
I0906 18:30:57.373217 13823 system_pods.go:89] "kube-apiserver-addons-959832" [c93b4ce0-62b0-4e1f-9a98-76b6e7ad4fbc] Running
I0906 18:30:57.373223 13823 system_pods.go:89] "kube-controller-manager-addons-959832" [3dc3e2e0-cdf7-4d83-8d8e-5cc86d87c45b] Running
I0906 18:30:57.373227 13823 system_pods.go:89] "kube-ingress-dns-minikube" [1673a19c-a4a9-4d9d-bda1-e073fb44b3d8] Running
I0906 18:30:57.373230 13823 system_pods.go:89] "kube-proxy-df5wg" [f92f8a67-fa25-410a-b7f6-928c602e53e5] Running
I0906 18:30:57.373237 13823 system_pods.go:89] "kube-scheduler-addons-959832" [0a2458fe-333d-4ca7-b2ab-c58159f3a491] Running
I0906 18:30:57.373242 13823 system_pods.go:89] "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0906 18:30:57.373246 13823 system_pods.go:89] "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
I0906 18:30:57.373252 13823 system_pods.go:89] "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0906 18:30:57.373257 13823 system_pods.go:89] "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0906 18:30:57.373264 13823 system_pods.go:89] "snapshot-controller-56fcc65765-db2j5" [afcb8d14-41d7-444b-b16d-496ca520ee39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0906 18:30:57.373273 13823 system_pods.go:89] "snapshot-controller-56fcc65765-jjdrv" [d3df181f-bfa3-4ef4-9767-ecc84c335cc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0906 18:30:57.373280 13823 system_pods.go:89] "storage-provisioner" [a837ebf7-7140-4baa-8b93-ea556996b204] Running
I0906 18:30:57.373287 13823 system_pods.go:89] "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
I0906 18:30:57.373299 13823 system_pods.go:126] duration metric: took 9.109597ms to wait for k8s-apps to be running ...
I0906 18:30:57.373309 13823 system_svc.go:44] waiting for kubelet service to be running ....
I0906 18:30:57.373355 13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0906 18:30:57.425478 13823 system_svc.go:56] duration metric: took 52.162346ms WaitForService to wait for kubelet
I0906 18:30:57.425503 13823 kubeadm.go:582] duration metric: took 39.586136805s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0906 18:30:57.425533 13823 node_conditions.go:102] verifying NodePressure condition ...
I0906 18:30:57.428818 13823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0906 18:30:57.428842 13823 node_conditions.go:123] node cpu capacity is 2
I0906 18:30:57.428863 13823 node_conditions.go:105] duration metric: took 3.314164ms to run NodePressure ...
I0906 18:30:57.428878 13823 start.go:241] waiting for startup goroutines ...
I0906 18:30:57.495273 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:57.495869 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:57.769593 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:57.798564 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:57.995122 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:57.995468 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:58.270153 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:58.299032 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:58.495028 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:58.495638 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:58.770199 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:58.797952 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:58.994635 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:58.995409 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:59.269612 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:59.298532 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:59.494666 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:59.495202 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:30:59.769637 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:30:59.799716 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:59.995110 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:59.997059 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:00.269925 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:00.299168 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:00.495168 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:00.495452 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:00.769831 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:00.798879 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:00.994356 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:00.995338 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:01.270323 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:01.298809 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:01.497749 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:01.509994 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:02.196171 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:02.197232 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:02.197446 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:02.198219 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:02.269772 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:02.299913 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:02.495441 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:02.496083 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:02.770038 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:02.800728 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:02.995143 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:02.995393 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:03.269175 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:03.298453 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:03.495672 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:03.495941 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:03.769214 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:03.798100 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:03.996193 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:03.996547 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:04.270229 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:04.300339 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:04.495048 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:04.495208 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:04.769698 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:04.798488 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:05.000395 13823 kapi.go:107] duration metric: took 37.509684094s to wait for kubernetes.io/minikube-addons=registry ...
I0906 18:31:05.000674 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:05.270104 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:05.297638 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:05.495343 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:05.770543 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:05.800954 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:05.994937 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:06.270489 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:06.299401 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:06.495523 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:06.775824 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:06.804605 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:07.000907 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:07.281094 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:07.306915 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:07.818623 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:07.820944 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:07.821122 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:07.994968 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:08.269992 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:08.298837 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:08.493945 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:08.769482 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:08.798377 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:08.994691 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:09.269835 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:09.299230 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:09.502957 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:09.769997 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:09.798765 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:10.127650 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:10.275919 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:10.300104 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:10.495617 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:10.769823 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:10.798656 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:10.995288 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:11.270073 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:11.299546 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:11.494131 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:11.771059 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:11.799920 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:11.995856 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:12.274737 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:12.299392 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:12.494262 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:12.769625 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:12.798619 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:12.995358 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:13.316812 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:13.317852 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:13.495815 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:13.769181 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:13.799259 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:13.995199 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:14.276613 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:14.379012 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:14.494898 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:14.770331 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:14.798773 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:14.995445 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:15.272540 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:15.301141 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:15.495285 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:15.770353 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:15.798730 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:15.994520 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:16.270657 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:16.300620 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:16.494263 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:16.770371 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:16.799256 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:16.994749 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:17.269747 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:17.298951 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:17.494719 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:17.769832 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:17.799470 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:17.994977 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:18.269720 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:18.310969 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:18.494867 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:18.769348 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:18.798225 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:18.994850 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:19.282829 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:19.384038 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:19.497045 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:19.770599 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:19.801611 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:19.996550 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:20.270037 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:20.311775 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:20.498768 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:20.769965 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:20.799204 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:20.997161 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:21.270035 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:21.299010 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:21.494660 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:21.769290 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:21.798619 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:21.994674 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:22.269883 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:22.300295 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:22.496723 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:22.771097 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:22.799152 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:23.013066 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:23.270485 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:23.299028 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:23.496372 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:23.770017 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:23.801362 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:23.996357 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:24.270445 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:24.299776 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:24.494072 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:25.030314 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:25.030783 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:25.031442 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:25.269910 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:25.371610 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:25.494715 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:25.770973 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:25.799735 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:25.994854 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:26.270976 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:26.299500 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:26.494510 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:26.770729 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:26.873976 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:26.993699 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:27.269916 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:27.299203 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:27.494353 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:27.771154 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:27.798428 13823 kapi.go:107] duration metric: took 58.504619679s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0906 18:31:27.996381 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:28.271088 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:28.493970 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:28.769758 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:28.994788 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:29.271720 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:29.496574 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:29.770127 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:29.994752 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:30.464639 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:30.495124 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:30.770101 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:30.995408 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:31.270144 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:31.495730 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:31.769464 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:31.996345 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:32.269861 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:32.495930 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:32.768939 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:32.996483 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:33.269235 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:33.494459 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:33.769303 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:33.994740 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:34.270162 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:34.494209 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:34.772239 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:34.995450 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:35.270037 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:35.494858 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:35.770518 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:35.994084 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:36.270405 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:36.496230 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:36.770326 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:36.994330 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:37.270147 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:37.493620 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:37.778857 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:38.113592 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:38.270475 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:38.494284 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:38.769614 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:39.006516 13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0906 18:31:39.273731 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:39.495548 13823 kapi.go:107] duration metric: took 1m12.005524271s to wait for app.kubernetes.io/name=ingress-nginx ...
I0906 18:31:39.770852 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:40.269133 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:40.769688 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:41.270179 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:41.769459 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:42.270714 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:42.770252 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:43.270294 13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:31:43.770209 13823 kapi.go:107] duration metric: took 1m12.504314576s to wait for kubernetes.io/minikube-addons=gcp-auth ...
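[editor's note] The long run of kapi.go:96 lines above, closed by the kapi.go:107 "duration metric" lines, is minikube repeatedly listing pods by label selector (roughly every half second) until every pod behind that selector is Running, then reporting the total wait. The Go sketch below only approximates that pattern with client-go; it is not minikube's actual kapi.go code, and the poll interval, timeout, helper name, and kubeconfig handling are illustrative assumptions.

```go
// Hedged sketch: approximates the polling implied by the repeated
// "waiting for pod ... current state: Pending" lines above.
// NOT minikube's implementation; interval/timeout values are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns until all are Running,
// printing the observed state on each attempt (compare kapi.go:96 above).
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		if len(pods.Items) == 0 {
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Selector and namespace taken from the log above; the 6m timeout is assumed.
	if err := waitForPodsRunning(context.Background(), cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
		panic(err)
	}
}
```

Under these assumptions, the printed lines would resemble the "waiting for pod ... current state: Pending" entries above, and the elapsed time at success corresponds to the kapi.go:107 duration metric.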
I0906 18:31:43.771902 13823 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-959832 cluster.
I0906 18:31:43.773493 13823 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0906 18:31:43.774994 13823 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
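[editor's note] The three out.go lines above describe how the gcp-auth addon behaves: credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label, and existing pods pick up credentials only after being recreated or after rerunning the addon enable with --refresh. The sketch below shows one way a pod could opt out under those rules; only the label key comes from the log, while the label value, pod name, image, and namespace are illustrative assumptions, not minikube's or the webhook's actual requirements.

```go
// Hedged sketch: create a pod labelled with the opt-out key named in the
// gcp-auth output above. The label value "true" is assumed; the log only
// documents the key `gcp-auth-skip-secret`.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical name for illustration
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Pods created without that label would, per the messages above, have the GCP credentials mounted by the gcp-auth webhook in the addons-959832 cluster.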
I0906 18:31:43.776439 13823 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, default-storageclass, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0906 18:31:43.778228 13823 addons.go:510] duration metric: took 1m25.938813235s for enable addons: enabled=[storage-provisioner ingress-dns default-storageclass nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0906 18:31:43.778280 13823 start.go:246] waiting for cluster config update ...
I0906 18:31:43.778303 13823 start.go:255] writing updated cluster config ...
I0906 18:31:43.778560 13823 ssh_runner.go:195] Run: rm -f paused
I0906 18:31:43.828681 13823 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
I0906 18:31:43.830792 13823 out.go:177] * Done! kubectl is now configured to use "addons-959832" cluster and "default" namespace by default
==> CRI-O <==
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.477681804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648059477605049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c0297f3-fe00-405e-9ba7-b7639a9f70e1 name=/runtime.v1.ImageService/ImageFsInfo
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.478519982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab2efd6c-1690-40d4-b3cd-25cee8b1e629 name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.478623655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab2efd6c-1690-40d4-b3cd-25cee8b1e629 name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.479015594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:751a19a588218de05376aea0383786ab3c8c10132343d3fa939969f20168d47a,PodSandboxId:9eff610caae62c68fd5df308e75d93b0e306aaedc003aa13c5175206cd50d82e,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648044525071065,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca6d482b-e311-418b-b2d8-b7dd38238386,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0033d69fcbd8e6d154c6229031ce690f9d53fc4de18acfc56a9100ab87063d8f,PodSandboxId:8b1ac3c44a7956fdba07d51c1dd11cf7d5ab97999d70bc46150eeabb8f26970f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725648041683592210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754a36f2-796a-43db-86bb-d5a98787bdac,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64bc8797628e87ff6d7db9bb03163065fc3cef5deaf292daf56f7f1723e79f0c,PodSandboxId:9177865f139ac637274428a14fa3e86411a8a8eb1ae2a167bc45e453e2ab1270,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648037698509771,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 99b323c2-294b-40f3-9308-37241d2e4d94,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.po
rts: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e,PodSandboxId:8b8b62d5172cf7631d6c383bf5bb62c7aca55268e507ef69f63a5cd2e24ef15c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725647498764868534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-5z4xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 834d08fb-b9a8-4a67-b022-fec07c4b5fa9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f
3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee8
69b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b
066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86606ac7f428d65be26e62d10b92b19fccc1a4f6c65aad
8d580fce58b25aa967,PodSandboxId:41aeff34f5a9ca0decd72d59cef3929fc44a2fac7245c5db7552b7d585c380c4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725647455743253120,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-nsxpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35f7718-6879-4edb-9a8b-5b4a82ad2a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:9b38efef5174e5e3049f34f60a96316e51b7dfe1598d0e18c65e07207af2ee1a,PodSandboxId:94957bf19e8b18bcb9321523886280255160384204f3a5f1ea91beff0eb6021b,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725647446920084288,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-zh76q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79327e55-0b23-469f-bdc9-0611cfa8a848,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218,PodSandboxId:2131ffc93d2dbdf77608df2a3747aa930cf8f0c284b8bab57c8e919f3295247a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725647435717081953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1673a19c-a4a9-4d9d-bda1-e073fb44b3d8,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-4
2b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019
743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab2efd6c-1690-40d4-b3cd-25cee8b1e629 name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.512966847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7eb61c96-0409-4aa9-b9ac-aa3f16b7fc79 name=/runtime.v1.RuntimeService/Version
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.513059301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7eb61c96-0409-4aa9-b9ac-aa3f16b7fc79 name=/runtime.v1.RuntimeService/Version
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.513970984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bd424b9-9df4-40ff-bdb0-d2af34377049 name=/runtime.v1.ImageService/ImageFsInfo
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.515153140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648059515121423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bd424b9-9df4-40ff-bdb0-d2af34377049 name=/runtime.v1.ImageService/ImageFsInfo
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.515690188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af4c0026-ba03-433a-b347-01633417c3a4 name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.515745740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af4c0026-ba03-433a-b347-01633417c3a4 name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.516591018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:751a19a588218de05376aea0383786ab3c8c10132343d3fa939969f20168d47a,PodSandboxId:9eff610caae62c68fd5df308e75d93b0e306aaedc003aa13c5175206cd50d82e,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648044525071065,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca6d482b-e311-418b-b2d8-b7dd38238386,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0033d69fcbd8e6d154c6229031ce690f9d53fc4de18acfc56a9100ab87063d8f,PodSandboxId:8b1ac3c44a7956fdba07d51c1dd11cf7d5ab97999d70bc46150eeabb8f26970f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725648041683592210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754a36f2-796a-43db-86bb-d5a98787bdac,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64bc8797628e87ff6d7db9bb03163065fc3cef5deaf292daf56f7f1723e79f0c,PodSandboxId:9177865f139ac637274428a14fa3e86411a8a8eb1ae2a167bc45e453e2ab1270,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648037698509771,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 99b323c2-294b-40f3-9308-37241d2e4d94,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.po
rts: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e,PodSandboxId:8b8b62d5172cf7631d6c383bf5bb62c7aca55268e507ef69f63a5cd2e24ef15c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725647498764868534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-5z4xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 834d08fb-b9a8-4a67-b022-fec07c4b5fa9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f
3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee8
69b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b
066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86606ac7f428d65be26e62d10b92b19fccc1a4f6c65aad
8d580fce58b25aa967,PodSandboxId:41aeff34f5a9ca0decd72d59cef3929fc44a2fac7245c5db7552b7d585c380c4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725647455743253120,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-nsxpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35f7718-6879-4edb-9a8b-5b4a82ad2a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:9b38efef5174e5e3049f34f60a96316e51b7dfe1598d0e18c65e07207af2ee1a,PodSandboxId:94957bf19e8b18bcb9321523886280255160384204f3a5f1ea91beff0eb6021b,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725647446920084288,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-zh76q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79327e55-0b23-469f-bdc9-0611cfa8a848,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218,PodSandboxId:2131ffc93d2dbdf77608df2a3747aa930cf8f0c284b8bab57c8e919f3295247a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725647435717081953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1673a19c-a4a9-4d9d-bda1-e073fb44b3d8,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-4
2b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019
743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af4c0026-ba03-433a-b347-01633417c3a4 name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.569539729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd75c494-b8e8-41a9-a9f0-4f4a5108f674 name=/runtime.v1.RuntimeService/Version
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.569629524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd75c494-b8e8-41a9-a9f0-4f4a5108f674 name=/runtime.v1.RuntimeService/Version
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.570645034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=685e3c2d-fb87-4633-8f49-c6cb353f8ba8 name=/runtime.v1.ImageService/ImageFsInfo
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.571745517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648059571718376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=685e3c2d-fb87-4633-8f49-c6cb353f8ba8 name=/runtime.v1.ImageService/ImageFsInfo
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.572415648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fd8e417-63d1-4f69-b718-145980fd9dbc name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.572561425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fd8e417-63d1-4f69-b718-145980fd9dbc name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.572983807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:751a19a588218de05376aea0383786ab3c8c10132343d3fa939969f20168d47a,PodSandboxId:9eff610caae62c68fd5df308e75d93b0e306aaedc003aa13c5175206cd50d82e,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648044525071065,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca6d482b-e311-418b-b2d8-b7dd38238386,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0033d69fcbd8e6d154c6229031ce690f9d53fc4de18acfc56a9100ab87063d8f,PodSandboxId:8b1ac3c44a7956fdba07d51c1dd11cf7d5ab97999d70bc46150eeabb8f26970f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725648041683592210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754a36f2-796a-43db-86bb-d5a98787bdac,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64bc8797628e87ff6d7db9bb03163065fc3cef5deaf292daf56f7f1723e79f0c,PodSandboxId:9177865f139ac637274428a14fa3e86411a8a8eb1ae2a167bc45e453e2ab1270,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648037698509771,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 99b323c2-294b-40f3-9308-37241d2e4d94,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.po
rts: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e,PodSandboxId:8b8b62d5172cf7631d6c383bf5bb62c7aca55268e507ef69f63a5cd2e24ef15c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725647498764868534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-5z4xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 834d08fb-b9a8-4a67-b022-fec07c4b5fa9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f
3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee8
69b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b
066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86606ac7f428d65be26e62d10b92b19fccc1a4f6c65aad
8d580fce58b25aa967,PodSandboxId:41aeff34f5a9ca0decd72d59cef3929fc44a2fac7245c5db7552b7d585c380c4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725647455743253120,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-nsxpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35f7718-6879-4edb-9a8b-5b4a82ad2a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:9b38efef5174e5e3049f34f60a96316e51b7dfe1598d0e18c65e07207af2ee1a,PodSandboxId:94957bf19e8b18bcb9321523886280255160384204f3a5f1ea91beff0eb6021b,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725647446920084288,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-zh76q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79327e55-0b23-469f-bdc9-0611cfa8a848,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218,PodSandboxId:2131ffc93d2dbdf77608df2a3747aa930cf8f0c284b8bab57c8e919f3295247a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725647435717081953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1673a19c-a4a9-4d9d-bda1-e073fb44b3d8,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-4
2b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019
743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fd8e417-63d1-4f69-b718-145980fd9dbc name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.606084604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bb0d2a1-f8b0-4793-9ac6-a6ed250e72fc name=/runtime.v1.RuntimeService/Version
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.606170783Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bb0d2a1-f8b0-4793-9ac6-a6ed250e72fc name=/runtime.v1.RuntimeService/Version
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.607185958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57c344d6-c86b-4ebe-993e-399dabdc4573 name=/runtime.v1.ImageService/ImageFsInfo
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.613546938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648059613514131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57c344d6-c86b-4ebe-993e-399dabdc4573 name=/runtime.v1.ImageService/ImageFsInfo
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.614546038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef8cfa00-b837-4681-b934-6c9df95c444f name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.614608199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef8cfa00-b837-4681-b934-6c9df95c444f name=/runtime.v1.RuntimeService/ListContainers
Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.615331637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:751a19a588218de05376aea0383786ab3c8c10132343d3fa939969f20168d47a,PodSandboxId:9eff610caae62c68fd5df308e75d93b0e306aaedc003aa13c5175206cd50d82e,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648044525071065,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca6d482b-e311-418b-b2d8-b7dd38238386,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0033d69fcbd8e6d154c6229031ce690f9d53fc4de18acfc56a9100ab87063d8f,PodSandboxId:8b1ac3c44a7956fdba07d51c1dd11cf7d5ab97999d70bc46150eeabb8f26970f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725648041683592210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754a36f2-796a-43db-86bb-d5a98787bdac,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64bc8797628e87ff6d7db9bb03163065fc3cef5deaf292daf56f7f1723e79f0c,PodSandboxId:9177865f139ac637274428a14fa3e86411a8a8eb1ae2a167bc45e453e2ab1270,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648037698509771,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 99b323c2-294b-40f3-9308-37241d2e4d94,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.po
rts: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e,PodSandboxId:8b8b62d5172cf7631d6c383bf5bb62c7aca55268e507ef69f63a5cd2e24ef15c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725647498764868534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-5z4xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 834d08fb-b9a8-4a67-b022-fec07c4b5fa9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f
3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee8
69b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b
066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86606ac7f428d65be26e62d10b92b19fccc1a4f6c65aad
8d580fce58b25aa967,PodSandboxId:41aeff34f5a9ca0decd72d59cef3929fc44a2fac7245c5db7552b7d585c380c4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725647455743253120,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-nsxpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35f7718-6879-4edb-9a8b-5b4a82ad2a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:9b38efef5174e5e3049f34f60a96316e51b7dfe1598d0e18c65e07207af2ee1a,PodSandboxId:94957bf19e8b18bcb9321523886280255160384204f3a5f1ea91beff0eb6021b,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725647446920084288,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-zh76q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79327e55-0b23-469f-bdc9-0611cfa8a848,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218,PodSandboxId:2131ffc93d2dbdf77608df2a3747aa930cf8f0c284b8bab57c8e919f3295247a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725647435717081953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1673a19c-a4a9-4d9d-bda1-e073fb44b3d8,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-4
2b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019
743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef8cfa00-b837-4681-b934-6c9df95c444f name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
751a19a588218 a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824 15 seconds ago Exited helper-pod 0 9eff610caae62 helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9
0033d69fcbd8e docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa 18 seconds ago Exited busybox 0 8b1ac3c44a795 test-local-path
64bc8797628e8 docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee 21 seconds ago Exited helper-pod 0 9177865f139ac helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9
47ff4cd5a2010 docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0 44 seconds ago Running nginx 0 e9d551110687a nginx
bff22acf8afe6 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b 9 minutes ago Running gcp-auth 0 6009e3b23d6b9 gcp-auth-89d5ffd79-wbp4z
2f6f132825107 registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6 9 minutes ago Running controller 0 8b8b62d5172cf ingress-nginx-controller-bc57996ff-5z4xh
b9dae7d0e5426 ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242 9 minutes ago Exited patch 2 09518ced68465 ingress-nginx-admission-patch-h6cwj
f1aec73f0b154 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012 9 minutes ago Exited create 0 d305340c16851 ingress-nginx-admission-create-gbh5k
dbdca73cd5f41 registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a 9 minutes ago Running metrics-server 0 ebd17a7bfd07d metrics-server-84c5f94fbc-flnx5
d8e6b5740dfd9 docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 10 minutes ago Running local-path-provisioner 0 bb57b9b0a87b0 local-path-provisioner-86d989889c-wmllc
86606ac7f428d nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 10 minutes ago Running nvidia-device-plugin-ctr 0 41aeff34f5a9c nvidia-device-plugin-daemonset-nsxpz
9b38efef5174e gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 10 minutes ago Running cloud-spanner-emulator 0 94957bf19e8b1 cloud-spanner-emulator-769b77f747-zh76q
3be35f5c5847b gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 10 minutes ago Running minikube-ingress-dns 0 2131ffc93d2db kube-ingress-dns-minikube
095caffa96df4 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 10 minutes ago Running storage-provisioner 0 fb03fe115a315 storage-provisioner
daf771eda93ba cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 10 minutes ago Running coredns 0 cf16f9b0ce0a6 coredns-6f6b679f8f-d5d26
f62f176bebb98 ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494 10 minutes ago Running kube-proxy 0 a16d4e27651e7 kube-proxy-df5wg
0976f654c6450 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1 10 minutes ago Running kube-controller-manager 0 08d02ee1f1b83 kube-controller-manager-addons-959832
0062bd6dff511 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94 10 minutes ago Running kube-scheduler 0 3810e200d7f2c kube-scheduler-addons-959832
14011f30e4b49 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4 10 minutes ago Running etcd 0 1340e66e90fd2 etcd-addons-959832
f03b3137e10ab 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3 10 minutes ago Running kube-apiserver 0 6a4a01ed6ac27 kube-apiserver-addons-959832
==> coredns [daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025] <==
[INFO] 10.244.0.8:53109 - 30493 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000031299s
[INFO] 10.244.0.8:51164 - 21323 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073777s
[INFO] 10.244.0.8:51164 - 9807 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00003634s
[INFO] 10.244.0.8:33912 - 61080 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030797s
[INFO] 10.244.0.8:33912 - 53146 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000256s
[INFO] 10.244.0.8:51671 - 8759 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027086s
[INFO] 10.244.0.8:51671 - 2357 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069078s
[INFO] 10.244.0.8:58937 - 47939 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029815s
[INFO] 10.244.0.8:58937 - 55677 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000025038s
[INFO] 10.244.0.8:59574 - 33097 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000055434s
[INFO] 10.244.0.8:59574 - 49222 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000032883s
[INFO] 10.244.0.8:34345 - 33033 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025905s
[INFO] 10.244.0.8:34345 - 61711 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025782s
[INFO] 10.244.0.8:40854 - 19935 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000024436s
[INFO] 10.244.0.8:40854 - 16861 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000022079s
[INFO] 10.244.0.8:54975 - 41823 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033452s
[INFO] 10.244.0.8:54975 - 6745 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041358s
[INFO] 10.244.0.22:39608 - 5840 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000623407s
[INFO] 10.244.0.22:47451 - 10373 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000773196s
[INFO] 10.244.0.22:47147 - 43920 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096203s
[INFO] 10.244.0.22:37201 - 19027 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000052062s
[INFO] 10.244.0.22:51583 - 38377 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070102s
[INFO] 10.244.0.22:37854 - 16491 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000049501s
[INFO] 10.244.0.22:55914 - 7247 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000846443s
[INFO] 10.244.0.22:51764 - 46657 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001169257s
==> describe nodes <==
Name: addons-959832
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-959832
kubernetes.io/os=linux
minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
minikube.k8s.io/name=addons-959832
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_06T18_30_14_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-959832
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 06 Sep 2024 18:30:10 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-959832
AcquireTime: <unset>
RenewTime: Fri, 06 Sep 2024 18:40:58 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 06 Sep 2024 18:40:46 +0000 Fri, 06 Sep 2024 18:30:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 06 Sep 2024 18:40:46 +0000 Fri, 06 Sep 2024 18:30:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 06 Sep 2024 18:40:46 +0000 Fri, 06 Sep 2024 18:30:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 06 Sep 2024 18:40:46 +0000 Fri, 06 Sep 2024 18:30:14 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.98
Hostname: addons-959832
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 789fcfcd81af4b61a593ac3d592db28c
System UUID: 789fcfcd-81af-4b61-a593-ac3d592db28c
Boot ID: ca224247-03d2-489f-a0b8-0a2fbb84d9da
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.31.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (16 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m15s
default cloud-spanner-emulator-769b77f747-zh76q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47s
gcp-auth gcp-auth-89d5ffd79-wbp4z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
ingress-nginx ingress-nginx-controller-bc57996ff-5z4xh 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 10m
kube-system coredns-6f6b679f8f-d5d26 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 10m
kube-system etcd-addons-959832 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 10m
kube-system kube-apiserver-addons-959832 250m (12%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-controller-manager-addons-959832 200m (10%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-proxy-df5wg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system kube-scheduler-addons-959832 100m (5%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system metrics-server-84c5f94fbc-flnx5 100m (5%) 0 (0%) 200Mi (5%) 0 (0%) 10m
kube-system nvidia-device-plugin-daemonset-nsxpz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
local-path-storage local-path-provisioner-86d989889c-wmllc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (12%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 10m kube-proxy
Normal Starting 10m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 10m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 10m kubelet Node addons-959832 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 10m kubelet Node addons-959832 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 10m kubelet Node addons-959832 status is now: NodeHasSufficientPID
Normal NodeReady 10m kubelet Node addons-959832 status is now: NodeReady
Normal RegisteredNode 10m node-controller Node addons-959832 event: Registered Node addons-959832 in Controller
==> dmesg <==
[ +5.073317] kauditd_printk_skb: 128 callbacks suppressed
[ +5.127265] kauditd_printk_skb: 76 callbacks suppressed
[ +6.787602] kauditd_printk_skb: 21 callbacks suppressed
[ +6.808461] kauditd_printk_skb: 34 callbacks suppressed
[Sep 6 18:31] kauditd_printk_skb: 9 callbacks suppressed
[ +5.023954] kauditd_printk_skb: 23 callbacks suppressed
[ +5.411470] kauditd_printk_skb: 60 callbacks suppressed
[ +6.032630] kauditd_printk_skb: 37 callbacks suppressed
[ +5.000760] kauditd_printk_skb: 9 callbacks suppressed
[ +5.371405] kauditd_printk_skb: 10 callbacks suppressed
[ +5.464629] kauditd_printk_skb: 42 callbacks suppressed
[ +9.171733] kauditd_printk_skb: 9 callbacks suppressed
[Sep 6 18:32] kauditd_printk_skb: 30 callbacks suppressed
[Sep 6 18:34] kauditd_printk_skb: 28 callbacks suppressed
[Sep 6 18:37] kauditd_printk_skb: 28 callbacks suppressed
[Sep 6 18:39] kauditd_printk_skb: 28 callbacks suppressed
[Sep 6 18:40] kauditd_printk_skb: 4 callbacks suppressed
[ +5.061671] kauditd_printk_skb: 9 callbacks suppressed
[ +5.069446] kauditd_printk_skb: 25 callbacks suppressed
[ +8.609090] kauditd_printk_skb: 19 callbacks suppressed
[ +6.878882] kauditd_printk_skb: 7 callbacks suppressed
[ +8.370924] kauditd_printk_skb: 33 callbacks suppressed
[ +5.422494] kauditd_printk_skb: 43 callbacks suppressed
[ +5.580656] kauditd_printk_skb: 26 callbacks suppressed
[ +10.557034] kauditd_printk_skb: 4 callbacks suppressed
==> etcd [14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d] <==
{"level":"info","ts":"2024-09-06T18:31:25.014738Z","caller":"traceutil/trace.go:171","msg":"trace[827224904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1082; }","duration":"258.225918ms","start":"2024-09-06T18:31:24.756506Z","end":"2024-09-06T18:31:25.014732Z","steps":["trace[827224904] 'range keys from in-memory index tree' (duration: 258.150808ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-06T18:31:25.014813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.47361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-06T18:31:25.014826Z","caller":"traceutil/trace.go:171","msg":"trace[654326182] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1082; }","duration":"230.487781ms","start":"2024-09-06T18:31:24.784334Z","end":"2024-09-06T18:31:25.014822Z","steps":["trace[654326182] 'range keys from in-memory index tree' (duration: 230.413178ms)"],"step_count":1}
{"level":"info","ts":"2024-09-06T18:31:30.448893Z","caller":"traceutil/trace.go:171","msg":"trace[336175103] linearizableReadLoop","detail":"{readStateIndex:1140; appliedIndex:1139; }","duration":"193.667597ms","start":"2024-09-06T18:31:30.255210Z","end":"2024-09-06T18:31:30.448878Z","steps":["trace[336175103] 'read index received' (duration: 193.5747ms)","trace[336175103] 'applied index is now lower than readState.Index' (duration: 92.313µs)"],"step_count":2}
{"level":"info","ts":"2024-09-06T18:31:30.449052Z","caller":"traceutil/trace.go:171","msg":"trace[147865116] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"195.658402ms","start":"2024-09-06T18:31:30.253384Z","end":"2024-09-06T18:31:30.449042Z","steps":["trace[147865116] 'process raft request' (duration: 195.381086ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-06T18:31:30.449255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.027216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-06T18:31:30.449308Z","caller":"traceutil/trace.go:171","msg":"trace[1936020184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"194.091492ms","start":"2024-09-06T18:31:30.255208Z","end":"2024-09-06T18:31:30.449299Z","steps":["trace[1936020184] 'agreement among raft nodes before linearized reading' (duration: 194.016579ms)"],"step_count":1}
{"level":"info","ts":"2024-09-06T18:31:38.095195Z","caller":"traceutil/trace.go:171","msg":"trace[688394279] linearizableReadLoop","detail":"{readStateIndex:1162; appliedIndex:1161; }","duration":"115.853572ms","start":"2024-09-06T18:31:37.979325Z","end":"2024-09-06T18:31:38.095179Z","steps":["trace[688394279] 'read index received' (duration: 115.687137ms)","trace[688394279] 'applied index is now lower than readState.Index' (duration: 165.625µs)"],"step_count":2}
{"level":"warn","ts":"2024-09-06T18:31:38.095479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.064057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-06T18:31:38.095541Z","caller":"traceutil/trace.go:171","msg":"trace[1813618553] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1130; }","duration":"116.211558ms","start":"2024-09-06T18:31:37.979321Z","end":"2024-09-06T18:31:38.095532Z","steps":["trace[1813618553] 'agreement among raft nodes before linearized reading' (duration: 116.005384ms)"],"step_count":1}
{"level":"info","ts":"2024-09-06T18:31:38.095837Z","caller":"traceutil/trace.go:171","msg":"trace[2080125568] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"147.639748ms","start":"2024-09-06T18:31:37.948183Z","end":"2024-09-06T18:31:38.095822Z","steps":["trace[2080125568] 'process raft request' (duration: 146.880754ms)"],"step_count":1}
{"level":"info","ts":"2024-09-06T18:31:42.416683Z","caller":"traceutil/trace.go:171","msg":"trace[91810177] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"156.247568ms","start":"2024-09-06T18:31:42.260415Z","end":"2024-09-06T18:31:42.416663Z","steps":["trace[91810177] 'process raft request' (duration: 155.748211ms)"],"step_count":1}
{"level":"info","ts":"2024-09-06T18:40:07.229181Z","caller":"traceutil/trace.go:171","msg":"trace[484312089] linearizableReadLoop","detail":"{readStateIndex:2159; appliedIndex:2158; }","duration":"409.788256ms","start":"2024-09-06T18:40:06.819346Z","end":"2024-09-06T18:40:07.229135Z","steps":["trace[484312089] 'read index received' (duration: 409.628912ms)","trace[484312089] 'applied index is now lower than readState.Index' (duration: 158.846µs)"],"step_count":2}
{"level":"info","ts":"2024-09-06T18:40:07.229379Z","caller":"traceutil/trace.go:171","msg":"trace[1656832041] transaction","detail":"{read_only:false; response_revision:2017; number_of_response:1; }","duration":"491.002048ms","start":"2024-09-06T18:40:06.738356Z","end":"2024-09-06T18:40:07.229358Z","steps":["trace[1656832041] 'process raft request' (duration: 490.652338ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-06T18:40:07.229604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.584673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-06T18:40:07.229643Z","caller":"traceutil/trace.go:171","msg":"trace[1915074209] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2017; }","duration":"248.626111ms","start":"2024-09-06T18:40:06.981009Z","end":"2024-09-06T18:40:07.229635Z","steps":["trace[1915074209] 'agreement among raft nodes before linearized reading' (duration: 248.574709ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-06T18:40:07.229740Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:40:06.738339Z","time spent":"491.264052ms","remote":"127.0.0.1:39516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-959832\" mod_revision:1958 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-959832\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-959832\" > >"}
{"level":"warn","ts":"2024-09-06T18:40:07.229558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.139686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2024-09-06T18:40:07.229900Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.345839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
{"level":"info","ts":"2024-09-06T18:40:07.229941Z","caller":"traceutil/trace.go:171","msg":"trace[1213588532] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2017; }","duration":"183.385298ms","start":"2024-09-06T18:40:07.046548Z","end":"2024-09-06T18:40:07.229933Z","steps":["trace[1213588532] 'agreement among raft nodes before linearized reading' (duration: 183.300185ms)"],"step_count":1}
{"level":"info","ts":"2024-09-06T18:40:07.229918Z","caller":"traceutil/trace.go:171","msg":"trace[1459748069] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2017; }","duration":"410.570505ms","start":"2024-09-06T18:40:06.819339Z","end":"2024-09-06T18:40:07.229910Z","steps":["trace[1459748069] 'agreement among raft nodes before linearized reading' (duration: 410.06832ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-06T18:40:07.230002Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:40:06.819307Z","time spent":"410.688119ms","remote":"127.0.0.1:39260","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2024-09-06T18:40:09.281386Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1536}
{"level":"info","ts":"2024-09-06T18:40:09.333184Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1536,"took":"51.266331ms","hash":4192817885,"current-db-size-bytes":6647808,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3444736,"current-db-size-in-use":"3.4 MB"}
{"level":"info","ts":"2024-09-06T18:40:09.333251Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4192817885,"revision":1536,"compact-revision":-1}
==> gcp-auth [bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961] <==
2024/09/06 18:31:42 GCP Auth Webhook started!
2024/09/06 18:31:43 Ready to marshal response ...
2024/09/06 18:31:43 Ready to write response ...
2024/09/06 18:31:44 Ready to marshal response ...
2024/09/06 18:31:44 Ready to write response ...
2024/09/06 18:31:44 Ready to marshal response ...
2024/09/06 18:31:44 Ready to write response ...
2024/09/06 18:39:57 Ready to marshal response ...
2024/09/06 18:39:57 Ready to write response ...
2024/09/06 18:40:01 Ready to marshal response ...
2024/09/06 18:40:01 Ready to write response ...
2024/09/06 18:40:03 Ready to marshal response ...
2024/09/06 18:40:03 Ready to write response ...
2024/09/06 18:40:12 Ready to marshal response ...
2024/09/06 18:40:12 Ready to write response ...
2024/09/06 18:40:20 Ready to marshal response ...
2024/09/06 18:40:20 Ready to write response ...
2024/09/06 18:40:36 Ready to marshal response ...
2024/09/06 18:40:36 Ready to write response ...
2024/09/06 18:40:36 Ready to marshal response ...
2024/09/06 18:40:36 Ready to write response ...
2024/09/06 18:40:43 Ready to marshal response ...
2024/09/06 18:40:43 Ready to write response ...
==> kernel <==
18:40:59 up 11 min, 0 users, load average: 1.99, 1.15, 0.69
Linux addons-959832 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9] <==
E0906 18:32:14.711040 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
W0906 18:32:14.711528 1 handler_proxy.go:99] no RequestInfo found in the context
E0906 18:32:14.711932 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E0906 18:32:14.714123 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
E0906 18:32:14.719474 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
I0906 18:32:14.784984 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0906 18:39:53.218243 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0906 18:39:54.261305 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0906 18:40:11.987036 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0906 18:40:12.163983 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.110.216"}
I0906 18:40:13.051545 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0906 18:40:35.983222 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0906 18:40:35.983535 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0906 18:40:36.005118 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0906 18:40:36.005246 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0906 18:40:36.035687 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0906 18:40:36.035737 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0906 18:40:36.054186 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0906 18:40:36.054461 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0906 18:40:37.036569 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0906 18:40:37.057021 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0906 18:40:37.073802 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49] <==
W0906 18:40:40.482396 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:40.482556 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:40:40.938352 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:40.938413 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:40:41.124392 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:41.124488 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:40:45.723588 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:45.723666 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:40:46.118637 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:46.118749 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:40:46.223772 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:46.223806 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0906 18:40:46.739616 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-959832"
I0906 18:40:47.681013 1 shared_informer.go:313] Waiting for caches to sync for resource quota
I0906 18:40:47.681115 1 shared_informer.go:320] Caches are synced for resource quota
I0906 18:40:48.246986 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0906 18:40:48.247095 1 shared_informer.go:320] Caches are synced for garbage collector
I0906 18:40:49.916821 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="3.445µs"
W0906 18:40:52.477004 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:52.477049 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:40:52.616550 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:52.616601 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:40:54.489507 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:40:54.489648 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0906 18:40:58.454614 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="3.229µs"
==> kube-proxy [f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0906 18:30:20.895600 1 proxier.go:734] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0906 18:30:20.905684 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.98"]
E0906 18:30:20.905767 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0906 18:30:20.981385 1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
I0906 18:30:20.981522 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0906 18:30:20.981552 1 server_linux.go:169] "Using iptables Proxier"
I0906 18:30:20.986309 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0906 18:30:20.986680 1 server.go:483] "Version info" version="v1.31.0"
I0906 18:30:20.986707 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0906 18:30:20.988245 1 config.go:197] "Starting service config controller"
I0906 18:30:20.988269 1 shared_informer.go:313] Waiting for caches to sync for service config
I0906 18:30:20.988299 1 config.go:104] "Starting endpoint slice config controller"
I0906 18:30:20.988303 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0906 18:30:20.988869 1 config.go:326] "Starting node config controller"
I0906 18:30:20.988881 1 shared_informer.go:313] Waiting for caches to sync for node config
I0906 18:30:21.089002 1 shared_informer.go:320] Caches are synced for node config
I0906 18:30:21.089043 1 shared_informer.go:320] Caches are synced for service config
I0906 18:30:21.089077 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832] <==
W0906 18:30:10.632826 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0906 18:30:10.632881 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0906 18:30:10.632992 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0906 18:30:10.633043 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:10.633145 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0906 18:30:10.633198 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0906 18:30:10.633303 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0906 18:30:10.633365 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:11.559856 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0906 18:30:11.559915 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0906 18:30:11.591626 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0906 18:30:11.591724 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0906 18:30:11.593014 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0906 18:30:11.593712 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:11.624825 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0906 18:30:11.625533 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:11.640090 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0906 18:30:11.640140 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0906 18:30:11.646831 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0906 18:30:11.646890 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0906 18:30:11.875922 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0906 18:30:11.876131 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:11.954173 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0906 18:30:11.954234 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0906 18:30:14.512534 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Sep 06 18:40:51 addons-959832 kubelet[1215]: I0906 18:40:51.346723 1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac00c1c-d26e-4f08-b91c-49baa60d8def" path="/var/lib/kubelet/pods/7ac00c1c-d26e-4f08-b91c-49baa60d8def/volumes"
Sep 06 18:40:53 addons-959832 kubelet[1215]: E0906 18:40:53.340174 1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3"
Sep 06 18:40:53 addons-959832 kubelet[1215]: E0906 18:40:53.765407 1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648053764681386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Sep 06 18:40:53 addons-959832 kubelet[1215]: E0906 18:40:53.765652 1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648053764681386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.104841 1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-gcp-creds\") pod \"ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3\" (UID: \"ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3\") "
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.104905 1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9gwq\" (UniqueName: \"kubernetes.io/projected/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-kube-api-access-v9gwq\") pod \"ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3\" (UID: \"ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3\") "
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.105269 1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3" (UID: "ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.120998 1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-kube-api-access-v9gwq" (OuterVolumeSpecName: "kube-api-access-v9gwq") pod "ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3" (UID: "ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3"). InnerVolumeSpecName "kube-api-access-v9gwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.205694 1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v9gwq\" (UniqueName: \"kubernetes.io/projected/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-kube-api-access-v9gwq\") on node \"addons-959832\" DevicePath \"\""
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.205735 1215 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-gcp-creds\") on node \"addons-959832\" DevicePath \"\""
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.811545 1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7khw\" (UniqueName: \"kubernetes.io/projected/995000c4-356d-4aee-b8b4-6c719240ca26-kube-api-access-z7khw\") pod \"995000c4-356d-4aee-b8b4-6c719240ca26\" (UID: \"995000c4-356d-4aee-b8b4-6c719240ca26\") "
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.814539 1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995000c4-356d-4aee-b8b4-6c719240ca26-kube-api-access-z7khw" (OuterVolumeSpecName: "kube-api-access-z7khw") pod "995000c4-356d-4aee-b8b4-6c719240ca26" (UID: "995000c4-356d-4aee-b8b4-6c719240ca26"). InnerVolumeSpecName "kube-api-access-z7khw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.912620 1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8hjv\" (UniqueName: \"kubernetes.io/projected/8ea39930-6a75-4ad5-a074-233a2b95f98f-kube-api-access-g8hjv\") pod \"8ea39930-6a75-4ad5-a074-233a2b95f98f\" (UID: \"8ea39930-6a75-4ad5-a074-233a2b95f98f\") "
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.912759 1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z7khw\" (UniqueName: \"kubernetes.io/projected/995000c4-356d-4aee-b8b4-6c719240ca26-kube-api-access-z7khw\") on node \"addons-959832\" DevicePath \"\""
Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.915027 1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea39930-6a75-4ad5-a074-233a2b95f98f-kube-api-access-g8hjv" (OuterVolumeSpecName: "kube-api-access-g8hjv") pod "8ea39930-6a75-4ad5-a074-233a2b95f98f" (UID: "8ea39930-6a75-4ad5-a074-233a2b95f98f"). InnerVolumeSpecName "kube-api-access-g8hjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.013788 1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g8hjv\" (UniqueName: \"kubernetes.io/projected/8ea39930-6a75-4ad5-a074-233a2b95f98f-kube-api-access-g8hjv\") on node \"addons-959832\" DevicePath \"\""
Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.344671 1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3" path="/var/lib/kubelet/pods/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3/volumes"
Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.387322 1215 scope.go:117] "RemoveContainer" containerID="dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907"
Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.435531 1215 scope.go:117] "RemoveContainer" containerID="dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907"
Sep 06 18:40:59 addons-959832 kubelet[1215]: E0906 18:40:59.436043 1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907\": container with ID starting with dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907 not found: ID does not exist" containerID="dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907"
Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.436091 1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907"} err="failed to get container status \"dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907\": rpc error: code = NotFound desc = could not find container \"dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907\": container with ID starting with dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907 not found: ID does not exist"
Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.436116 1215 scope.go:117] "RemoveContainer" containerID="4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28"
Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.461988 1215 scope.go:117] "RemoveContainer" containerID="4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28"
Sep 06 18:40:59 addons-959832 kubelet[1215]: E0906 18:40:59.462660 1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28\": container with ID starting with 4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28 not found: ID does not exist" containerID="4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28"
Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.462708 1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28"} err="failed to get container status \"4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28\": rpc error: code = NotFound desc = could not find container \"4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28\": container with ID starting with 4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28 not found: ID does not exist"
==> storage-provisioner [095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120] <==
I0906 18:30:26.339092 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0906 18:30:26.364532 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0906 18:30:26.364614 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0906 18:30:26.389908 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0906 18:30:26.390911 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3!
I0906 18:30:26.391024 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c870ecaa-1488-487e-a063-0e518015e13e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3 became leader
I0906 18:30:26.492036 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-959832 -n addons-959832
helpers_test.go:261: (dbg) Run: kubectl --context addons-959832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-gbh5k ingress-nginx-admission-patch-h6cwj
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-959832 describe pod busybox ingress-nginx-admission-create-gbh5k ingress-nginx-admission-patch-h6cwj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-959832 describe pod busybox ingress-nginx-admission-create-gbh5k ingress-nginx-admission-patch-h6cwj: exit status 1 (73.07959ms)
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: addons-959832/192.168.39.98
Start Time: Fri, 06 Sep 2024 18:31:44 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.23
IPs:
IP: 10.244.0.23
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n8sxx (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-n8sxx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m16s default-scheduler Successfully assigned default/busybox to addons-959832
Normal Pulling 7m48s (x4 over 9m16s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m48s (x4 over 9m16s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning Failed 7m48s (x4 over 9m16s) kubelet Error: ErrImagePull
Warning Failed 7m37s (x6 over 9m15s) kubelet Error: ImagePullBackOff
Normal BackOff 4m15s (x20 over 9m15s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-gbh5k" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-h6cwj" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-959832 describe pod busybox ingress-nginx-admission-create-gbh5k ingress-nginx-admission-patch-h6cwj: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.13s)