=== RUN TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.885793ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8hvdw" [678aa223-edb6-4a6c-b3e5-5d95e0ea40f6] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003824315s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4nzb4" [35894a53-f7e8-4743-9eea-200f3986fcd6] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003292519s
addons_test.go:338: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.084822959s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/23 10:32:54 [DEBUG] GET http://10.150.0.16:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:44303 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:20 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 23 Sep 24 10:20 UTC | 23 Sep 24 10:21 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:23 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 23 Sep 24 10:23 UTC | 23 Sep 24 10:23 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 23 Sep 24 10:32 UTC | 23 Sep 24 10:32 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/23 10:21:20
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 10:21:20.820039 14503 out.go:345] Setting OutFile to fd 1 ...
I0923 10:21:20.820260 14503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:21:20.820273 14503 out.go:358] Setting ErrFile to fd 2...
I0923 10:21:20.820279 14503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:21:20.820494 14503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3689/.minikube/bin
I0923 10:21:20.821111 14503 out.go:352] Setting JSON to false
I0923 10:21:20.821946 14503 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":228,"bootTime":1727086653,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0923 10:21:20.822041 14503 start.go:139] virtualization: kvm guest
I0923 10:21:20.824455 14503 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0923 10:21:20.826064 14503 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3689/.minikube/cache/preloaded-tarball: no such file or directory
I0923 10:21:20.826081 14503 out.go:177] - MINIKUBE_LOCATION=19689
I0923 10:21:20.826099 14503 notify.go:220] Checking for updates...
I0923 10:21:20.828775 14503 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 10:21:20.830102 14503 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19689-3689/kubeconfig
I0923 10:21:20.831449 14503 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3689/.minikube
I0923 10:21:20.832760 14503 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0923 10:21:20.834126 14503 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0923 10:21:20.835492 14503 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 10:21:20.847200 14503 out.go:177] * Using the none driver based on user configuration
I0923 10:21:20.848385 14503 start.go:297] selected driver: none
I0923 10:21:20.848410 14503 start.go:901] validating driver "none" against <nil>
I0923 10:21:20.848424 14503 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 10:21:20.848473 14503 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0923 10:21:20.848761 14503 out.go:270] ! The 'none' driver does not respect the --memory flag
I0923 10:21:20.849338 14503 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0923 10:21:20.849554 14503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 10:21:20.849577 14503 cni.go:84] Creating CNI manager for ""
I0923 10:21:20.849623 14503 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 10:21:20.849632 14503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0923 10:21:20.849676 14503 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 10:21:20.851103 14503 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0923 10:21:20.852575 14503 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/config.json ...
I0923 10:21:20.852610 14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/config.json: {Name:mk91c6775a53b295bfcd832a0223bb0435d503a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:20.852737 14503 start.go:360] acquireMachinesLock for minikube: {Name:mk967f578fd3b876cb945ce54e006da4ee685f93 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 10:21:20.852767 14503 start.go:364] duration metric: took 16.983µs to acquireMachinesLock for "minikube"
I0923 10:21:20.852785   14503 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 10:21:20.852838 14503 start.go:125] createHost starting for "" (driver="none")
I0923 10:21:20.854284 14503 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0923 10:21:20.855416 14503 exec_runner.go:51] Run: systemctl --version
I0923 10:21:20.858054 14503 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0923 10:21:20.858086 14503 client.go:168] LocalClient.Create starting
I0923 10:21:20.858169 14503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3689/.minikube/certs/ca.pem
I0923 10:21:20.858201 14503 main.go:141] libmachine: Decoding PEM data...
I0923 10:21:20.858217 14503 main.go:141] libmachine: Parsing certificate...
I0923 10:21:20.858272 14503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19689-3689/.minikube/certs/cert.pem
I0923 10:21:20.858291 14503 main.go:141] libmachine: Decoding PEM data...
I0923 10:21:20.858304 14503 main.go:141] libmachine: Parsing certificate...
I0923 10:21:20.858586 14503 client.go:171] duration metric: took 493.569µs to LocalClient.Create
I0923 10:21:20.858608 14503 start.go:167] duration metric: took 556.143µs to libmachine.API.Create "minikube"
I0923 10:21:20.858613 14503 start.go:293] postStartSetup for "minikube" (driver="none")
I0923 10:21:20.858654 14503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0923 10:21:20.858698 14503 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0923 10:21:20.866594 14503 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0923 10:21:20.866613 14503 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0923 10:21:20.866622 14503 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0923 10:21:20.868535 14503 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0923 10:21:20.869869 14503 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3689/.minikube/addons for local assets ...
I0923 10:21:20.869932 14503 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3689/.minikube/files for local assets ...
I0923 10:21:20.869953 14503 start.go:296] duration metric: took 11.335604ms for postStartSetup
I0923 10:21:20.870612 14503 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/config.json ...
I0923 10:21:20.870745 14503 start.go:128] duration metric: took 17.890139ms to createHost
I0923 10:21:20.870758 14503 start.go:83] releasing machines lock for "minikube", held for 17.976336ms
I0923 10:21:20.871120 14503 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0923 10:21:20.871243 14503 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0923 10:21:20.873040 14503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0923 10:21:20.873098 14503 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0923 10:21:20.883725 14503 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0923 10:21:20.883754 14503 start.go:495] detecting cgroup driver to use...
I0923 10:21:20.883783 14503 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 10:21:20.883898 14503 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 10:21:20.904572 14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0923 10:21:20.914147 14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0923 10:21:20.925632 14503 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0923 10:21:20.925689 14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0923 10:21:20.936554 14503 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 10:21:20.948083 14503 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0923 10:21:20.958960 14503 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 10:21:20.969337 14503 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0923 10:21:20.977937 14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0923 10:21:20.986784 14503 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0923 10:21:20.996118 14503 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0923 10:21:21.005327 14503 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0923 10:21:21.013802 14503 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0923 10:21:21.022053 14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 10:21:21.259417 14503 exec_runner.go:51] Run: sudo systemctl restart containerd
I0923 10:21:21.323133 14503 start.go:495] detecting cgroup driver to use...
I0923 10:21:21.323178 14503 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 10:21:21.323321 14503 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 10:21:21.342873 14503 exec_runner.go:51] Run: which cri-dockerd
I0923 10:21:21.343758 14503 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0923 10:21:21.353300 14503 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0923 10:21:21.353328 14503 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0923 10:21:21.353362 14503 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0923 10:21:21.361144 14503 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0923 10:21:21.361325 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1441809314 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0923 10:21:21.370443 14503 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0923 10:21:21.594413 14503 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0923 10:21:21.822571 14503 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0923 10:21:21.822734 14503 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0923 10:21:21.822748 14503 exec_runner.go:203] rm: /etc/docker/daemon.json
I0923 10:21:21.822786 14503 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0923 10:21:21.831865 14503 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0923 10:21:21.831999 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2141491122 /etc/docker/daemon.json
I0923 10:21:21.841304 14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 10:21:22.077085 14503 exec_runner.go:51] Run: sudo systemctl restart docker
I0923 10:21:22.374121 14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0923 10:21:22.385617 14503 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0923 10:21:22.402176 14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 10:21:22.415426 14503 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0923 10:21:22.661879 14503 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0923 10:21:22.883350 14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 10:21:23.113204 14503 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0923 10:21:23.126919 14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 10:21:23.137701 14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 10:21:23.364075 14503 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0923 10:21:23.435451 14503 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0923 10:21:23.435548 14503 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0923 10:21:23.437235 14503 start.go:563] Will wait 60s for crictl version
I0923 10:21:23.437279 14503 exec_runner.go:51] Run: which crictl
I0923 10:21:23.438148 14503 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0923 10:21:23.469977 14503 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0923 10:21:23.470044 14503 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0923 10:21:23.491325 14503 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0923 10:21:23.514973 14503 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0923 10:21:23.515052 14503 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0923 10:21:23.518190 14503 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0923 10:21:23.519499   14503 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.150.0.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0923 10:21:23.519608 14503 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 10:21:23.519619 14503 kubeadm.go:934] updating node { 10.150.0.16 8443 v1.31.1 docker true true} ...
I0923 10:21:23.519710 14503 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-14 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.150.0.16 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0923 10:21:23.519755 14503 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0923 10:21:23.568548 14503 cni.go:84] Creating CNI manager for ""
I0923 10:21:23.568576 14503 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 10:21:23.568586 14503 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0923 10:21:23.568606   14503 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.150.0.16 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-14 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.150.0.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.150.0.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0923 10:21:23.568743 14503 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.150.0.16
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-14"
kubeletExtraArgs:
node-ip: 10.150.0.16
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.150.0.16"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
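The kubeadm config logged above is one multi-document YAML stream carrying four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A rough sketch of how such a stream can be split and each document identified by its `kind` — pure standard library, not a real YAML parser, and `list_kinds` is a hypothetical helper for illustration:

```python
# Split a multi-document YAML stream on "---" separators and report each
# document's kind. Illustrative only: relies on top-level "kind:" lines,
# not full YAML parsing.
def list_kinds(stream: str):
    docs = [d for d in stream.split("\n---\n") if d.strip()]
    kinds = []
    for doc in docs:
        kind = next((line.split(":", 1)[1].strip()
                     for line in doc.splitlines()
                     if line.startswith("kind:")), None)
        kinds.append(kind)
    return kinds

sample = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""
print(list_kinds(sample))
```

The same four kinds appear, in the same order, as in the config minikube renders above.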
I0923 10:21:23.568799 14503 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0923 10:21:23.577944 14503 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0923 10:21:23.578005 14503 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0923 10:21:23.585875 14503 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0923 10:21:23.585886 14503 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0923 10:21:23.585899 14503 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
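The `?checksum=file:` suffix on each download URL above pins the binary to its published `.sha256` file. A minimal sketch of the verification step itself, assuming the downloaded bytes and the expected digest text are already in hand (`verify_sha256` is an illustrative name, not minikube's code):

```python
import hashlib

def verify_sha256(data: bytes, expected: str) -> bool:
    """Return True if data hashes to the expected SHA-256 digest.

    `expected` may be a bare hex digest or a "<digest>  <filename>" line
    as commonly found in published .sha256 files.
    """
    return hashlib.sha256(data).hexdigest() == expected.strip().split()[0]

blob = b"fake kubelet binary"
digest = hashlib.sha256(blob).hexdigest()
print(verify_sha256(blob, digest))          # intact download
print(verify_sha256(blob + b"x", digest))   # corrupted download
```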
I0923 10:21:23.585923 14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0923 10:21:23.585962 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0923 10:21:23.585968 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0923 10:21:23.598069 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0923 10:21:23.636445 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube921111766 /var/lib/minikube/binaries/v1.31.1/kubectl
I0923 10:21:23.646655 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2618589158 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0923 10:21:23.671937 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2563588793 /var/lib/minikube/binaries/v1.31.1/kubelet
I0923 10:21:23.738901 14503 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0923 10:21:23.748287 14503 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0923 10:21:23.748312 14503 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0923 10:21:23.748357 14503 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0923 10:21:23.756247 14503 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I0923 10:21:23.756397 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4278880996 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0923 10:21:23.764888 14503 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0923 10:21:23.764912 14503 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0923 10:21:23.764953 14503 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0923 10:21:23.772935 14503 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0923 10:21:23.773098 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1854200395 /lib/systemd/system/kubelet.service
I0923 10:21:23.781390 14503 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
I0923 10:21:23.781522 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1075798674 /var/tmp/minikube/kubeadm.yaml.new
I0923 10:21:23.790285 14503 exec_runner.go:51] Run: grep 10.150.0.16 control-plane.minikube.internal$ /etc/hosts
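The grep above checks whether `/etc/hosts` already maps the node IP to `control-plane.minikube.internal`. The same check against an in-memory hosts file can be sketched as follows (illustrative helper, not minikube's implementation):

```python
def has_host_entry(hosts_text: str, ip: str, hostname: str) -> bool:
    # A hosts line is "IP <whitespace> name [name ...]"; '#' starts a comment.
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()
        if fields and fields[0] == ip and hostname in fields[1:]:
            return True
    return False

hosts = "127.0.0.1 localhost\n10.150.0.16 control-plane.minikube.internal\n"
print(has_host_entry(hosts, "10.150.0.16", "control-plane.minikube.internal"))
```

When the entry is missing, minikube would append it before continuing; here the grep succeeds, so no edit is needed.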
I0923 10:21:23.791817 14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 10:21:24.018441 14503 exec_runner.go:51] Run: sudo systemctl start kubelet
I0923 10:21:24.034684 14503 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube for IP: 10.150.0.16
I0923 10:21:24.034707 14503 certs.go:194] generating shared ca certs ...
I0923 10:21:24.034729 14503 certs.go:226] acquiring lock for ca certs: {Name:mk10a034bcc1c0616fe44cc8e593fe0ec22b8be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:24.034884 14503 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3689/.minikube/ca.key
I0923 10:21:24.034947 14503 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3689/.minikube/proxy-client-ca.key
I0923 10:21:24.034961 14503 certs.go:256] generating profile certs ...
I0923 10:21:24.035037 14503 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.key
I0923 10:21:24.035056 14503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.crt with IP's: []
I0923 10:21:24.180531 14503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.crt ...
I0923 10:21:24.180565 14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.crt: {Name:mk5288fe1432e0a766b450b6d8afe83611266a2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:24.180704 14503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.key ...
I0923 10:21:24.180747 14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/client.key: {Name:mkde2adf88a49c1bb64334f50498662801f53efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:24.180821 14503 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key.d7fe11b0
I0923 10:21:24.180835 14503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt.d7fe11b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.150.0.16]
I0923 10:21:24.232375 14503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt.d7fe11b0 ...
I0923 10:21:24.232403 14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt.d7fe11b0: {Name:mk4a77e46b001160b446596215b33355b2c74ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:24.232522 14503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key.d7fe11b0 ...
I0923 10:21:24.232532 14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key.d7fe11b0: {Name:mkf3b2b285a9e42cf9282d7aa9018345ee355df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:24.232587 14503 certs.go:381] copying /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt.d7fe11b0 -> /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt
I0923 10:21:24.232670 14503 certs.go:385] copying /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key.d7fe11b0 -> /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key
I0923 10:21:24.232725 14503 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.key
I0923 10:21:24.232736 14503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0923 10:21:24.286918 14503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.crt ...
I0923 10:21:24.286947 14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.crt: {Name:mk0cacb6cbb991de10b5cbab21f78b060a583593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:24.287068 14503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.key ...
I0923 10:21:24.287078 14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.key: {Name:mke22c67c96fa7a6327fc541375f15884f31ba42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:24.287246 14503 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3689/.minikube/certs/ca-key.pem (1675 bytes)
I0923 10:21:24.287282 14503 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3689/.minikube/certs/ca.pem (1078 bytes)
I0923 10:21:24.287305 14503 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3689/.minikube/certs/cert.pem (1123 bytes)
I0923 10:21:24.287330 14503 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3689/.minikube/certs/key.pem (1679 bytes)
I0923 10:21:24.287871 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0923 10:21:24.288006 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2089542381 /var/lib/minikube/certs/ca.crt
I0923 10:21:24.296864 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0923 10:21:24.296998 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2244881567 /var/lib/minikube/certs/ca.key
I0923 10:21:24.306200 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0923 10:21:24.306327 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2412715834 /var/lib/minikube/certs/proxy-client-ca.crt
I0923 10:21:24.314609 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0923 10:21:24.314778 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2217015731 /var/lib/minikube/certs/proxy-client-ca.key
I0923 10:21:24.323830 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0923 10:21:24.323980 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3511140915 /var/lib/minikube/certs/apiserver.crt
I0923 10:21:24.333484 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0923 10:21:24.333630 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube297324337 /var/lib/minikube/certs/apiserver.key
I0923 10:21:24.342308 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0923 10:21:24.342476 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3067266897 /var/lib/minikube/certs/proxy-client.crt
I0923 10:21:24.351287 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0923 10:21:24.351421 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4278780158 /var/lib/minikube/certs/proxy-client.key
I0923 10:21:24.360613 14503 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0923 10:21:24.360636 14503 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:24.360669 14503 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:24.368423 14503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0923 10:21:24.368575 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3569917701 /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:24.377349 14503 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0923 10:21:24.377541 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1271971900 /var/lib/minikube/kubeconfig
I0923 10:21:24.385873 14503 exec_runner.go:51] Run: openssl version
I0923 10:21:24.388759 14503 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0923 10:21:24.398659 14503 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:24.399990 14503 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 10:21 /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:24.400036 14503 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0923 10:21:24.402954 14503 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0923 10:21:24.411416 14503 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0923 10:21:24.412601 14503 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0923 10:21:24.412641 14503 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.150.0.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 10:21:24.412754 14503 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0923 10:21:24.428157 14503 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0923 10:21:24.437463 14503 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0923 10:21:24.446461 14503 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0923 10:21:24.467863 14503 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0923 10:21:24.476588 14503 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0923 10:21:24.476610 14503 kubeadm.go:157] found existing configuration files:
I0923 10:21:24.476651 14503 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0923 10:21:24.484402 14503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0923 10:21:24.484458 14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0923 10:21:24.491914 14503 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0923 10:21:24.499754 14503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0923 10:21:24.499819 14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0923 10:21:24.507842 14503 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0923 10:21:24.516201 14503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0923 10:21:24.516256 14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0923 10:21:24.525186 14503 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0923 10:21:24.533399 14503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0923 10:21:24.533461 14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
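The grep-then-`rm -f` sequence above treats each kubeconfig under `/etc/kubernetes` as stale unless it references the expected control-plane endpoint. A sketch of that cleanup loop over real files, run against a temp directory instead of `/etc/kubernetes` (the filenames match the log; the helper itself is illustrative):

```python
import os
import tempfile

def remove_stale_configs(conf_dir: str, endpoint: str) -> list:
    """Delete conf files that don't mention the endpoint; return what was removed."""
    removed = []
    for name in ("admin.conf", "kubelet.conf",
                 "controller-manager.conf", "scheduler.conf"):
        path = os.path.join(conf_dir, name)
        if not os.path.exists(path):
            continue  # missing files (as in the log above) are simply skipped
        with open(path) as f:
            if endpoint not in f.read():
                os.remove(path)
                removed.append(name)
    return removed

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "admin.conf"), "w") as f:
        f.write("server: https://other-host:6443\n")
    with open(os.path.join(d, "kubelet.conf"), "w") as f:
        f.write("server: https://control-plane.minikube.internal:8443\n")
    print(remove_stale_configs(d, "https://control-plane.minikube.internal:8443"))
```

In the log the files don't exist at all (a first start), so every grep exits 2 and the `rm -f` calls are no-ops.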
I0923 10:21:24.541349 14503 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0923 10:21:24.578100 14503 kubeadm.go:310] W0923 10:21:24.577939 15376 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 10:21:24.578656 14503 kubeadm.go:310] W0923 10:21:24.578599 15376 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 10:21:24.580257 14503 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0923 10:21:24.580283 14503 kubeadm.go:310] [preflight] Running pre-flight checks
I0923 10:21:24.671178 14503 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0923 10:21:24.671303 14503 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0923 10:21:24.671316 14503 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0923 10:21:24.671321 14503 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0923 10:21:24.681280 14503 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0923 10:21:24.685058 14503 out.go:235] - Generating certificates and keys ...
I0923 10:21:24.685117 14503 kubeadm.go:310] [certs] Using existing ca certificate authority
I0923 10:21:24.685129 14503 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0923 10:21:24.851793 14503 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0923 10:21:25.045916 14503 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0923 10:21:25.084012 14503 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0923 10:21:25.345658 14503 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0923 10:21:25.466973 14503 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0923 10:21:25.467008 14503 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-14] and IPs [10.150.0.16 127.0.0.1 ::1]
I0923 10:21:25.530655 14503 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0923 10:21:25.530785 14503 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-14] and IPs [10.150.0.16 127.0.0.1 ::1]
I0923 10:21:25.745130 14503 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0923 10:21:25.932003 14503 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0923 10:21:26.062990 14503 kubeadm.go:310] [certs] Generating "sa" key and public key
I0923 10:21:26.063162 14503 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0923 10:21:26.262182 14503 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0923 10:21:26.549221 14503 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0923 10:21:26.716717 14503 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0923 10:21:26.766183 14503 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0923 10:21:27.083988 14503 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0923 10:21:27.084541 14503 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0923 10:21:27.086810 14503 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0923 10:21:27.088923 14503 out.go:235] - Booting up control plane ...
I0923 10:21:27.088960 14503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0923 10:21:27.088982 14503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0923 10:21:27.089435 14503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0923 10:21:27.114718 14503 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0923 10:21:27.120016 14503 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0923 10:21:27.120050 14503 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0923 10:21:27.357126 14503 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0923 10:21:27.357155 14503 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0923 10:21:27.858879 14503 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.724184ms
I0923 10:21:27.858905 14503 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0923 10:21:32.860566 14503 kubeadm.go:310] [api-check] The API server is healthy after 5.001673562s
I0923 10:21:32.873499 14503 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0923 10:21:32.885632 14503 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0923 10:21:32.907676 14503 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0923 10:21:32.907702 14503 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-14 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0923 10:21:32.916273 14503 kubeadm.go:310] [bootstrap-token] Using token: 159jws.lpwtfljcxiulbgh7
I0923 10:21:32.917661 14503 out.go:235] - Configuring RBAC rules ...
I0923 10:21:32.917688 14503 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0923 10:21:32.921368 14503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0923 10:21:32.928142 14503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0923 10:21:32.930689 14503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0923 10:21:32.934937 14503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0923 10:21:32.937751 14503 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0923 10:21:33.268626 14503 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0923 10:21:33.687519 14503 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0923 10:21:34.267786 14503 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0923 10:21:34.268645 14503 kubeadm.go:310]
I0923 10:21:34.268657 14503 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0923 10:21:34.268661 14503 kubeadm.go:310]
I0923 10:21:34.268666 14503 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0923 10:21:34.268670 14503 kubeadm.go:310]
I0923 10:21:34.268675 14503 kubeadm.go:310] mkdir -p $HOME/.kube
I0923 10:21:34.268679 14503 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0923 10:21:34.268682 14503 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0923 10:21:34.268686 14503 kubeadm.go:310]
I0923 10:21:34.268689 14503 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0923 10:21:34.268693 14503 kubeadm.go:310]
I0923 10:21:34.268697 14503 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0923 10:21:34.268700 14503 kubeadm.go:310]
I0923 10:21:34.268704 14503 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0923 10:21:34.268707 14503 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0923 10:21:34.268711 14503 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0923 10:21:34.268715 14503 kubeadm.go:310]
I0923 10:21:34.268719 14503 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0923 10:21:34.268723 14503 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0923 10:21:34.268727 14503 kubeadm.go:310]
I0923 10:21:34.268730 14503 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 159jws.lpwtfljcxiulbgh7 \
I0923 10:21:34.268735 14503 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:912df576ac3a30e2c8fe7e582ef2e1cefa71f1abe1ae22d12bbdb9d33952da04 \
I0923 10:21:34.268739 14503 kubeadm.go:310] --control-plane
I0923 10:21:34.268743 14503 kubeadm.go:310]
I0923 10:21:34.268747 14503 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0923 10:21:34.268751 14503 kubeadm.go:310]
I0923 10:21:34.268755 14503 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 159jws.lpwtfljcxiulbgh7 \
I0923 10:21:34.268759 14503 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:912df576ac3a30e2c8fe7e582ef2e1cefa71f1abe1ae22d12bbdb9d33952da04
I0923 10:21:34.271529 14503 cni.go:84] Creating CNI manager for ""
I0923 10:21:34.271553 14503 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 10:21:34.273534 14503 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0923 10:21:34.274871 14503 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0923 10:21:34.286762 14503 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0923 10:21:34.286920 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4250734174 /etc/cni/net.d/1-k8s.conflist
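The 496-byte file written to `/etc/cni/net.d/1-k8s.conflist` above is a bridge CNI conflist. A minimal example of what such a conflist could contain, assembled and round-tripped with the `json` module — the field values are assumptions modeled on a typical bridge config (with the IPAM subnet matching the `podSubnet` in the kubeadm config above), not the exact bytes minikube writes:

```python
import json

# Hypothetical bridge CNI conflist; structure follows the CNI config-list
# format (a "plugins" chain), values are illustrative.
conflist = {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "bridge",
            "isDefaultGateway": True,
            "ipMasq": True,
            "hairpinMode": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/16",  # matches podSubnet above
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

text = json.dumps(conflist, indent=2)
print(json.loads(text)["plugins"][0]["type"])
```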
I0923 10:21:34.298046 14503 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0923 10:21:34.298186 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-14 minikube.k8s.io/updated_at=2024_09_23T10_21_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0923 10:21:34.298208 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:34.306828 14503 ops.go:34] apiserver oom_adj: -16
I0923 10:21:34.367518 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:34.867819 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:35.368311 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:35.868488 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:36.368174 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:36.868201 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:37.368120 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:37.867711 14503 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 10:21:37.939149 14503 kubeadm.go:1113] duration metric: took 3.641054991s to wait for elevateKubeSystemPrivileges
I0923 10:21:37.939185 14503 kubeadm.go:394] duration metric: took 13.526545701s to StartCluster
I0923 10:21:37.939206 14503 settings.go:142] acquiring lock: {Name:mk859aef9f68053644345f1d9ec880181c903239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:37.939269 14503 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19689-3689/kubeconfig
I0923 10:21:37.939984 14503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3689/kubeconfig: {Name:mk51e817e2092847322764330e83dc7db829c6ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 10:21:37.940203 14503 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0923 10:21:37.940254 14503 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0923 10:21:37.940341 14503 addons.go:69] Setting yakd=true in profile "minikube"
I0923 10:21:37.940355 14503 addons.go:69] Setting metrics-server=true in profile "minikube"
I0923 10:21:37.940370 14503 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0923 10:21:37.940378 14503 addons.go:234] Setting addon metrics-server=true in "minikube"
I0923 10:21:37.940391 14503 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0923 10:21:37.940381 14503 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0923 10:21:37.940415 14503 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0923 10:21:37.940362 14503 addons.go:234] Setting addon yakd=true in "minikube"
I0923 10:21:37.940421 14503 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:21:37.940437 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.940453 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.940441 14503 addons.go:69] Setting volcano=true in profile "minikube"
I0923 10:21:37.940468 14503 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0923 10:21:37.940486 14503 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0923 10:21:37.940490 14503 addons.go:234] Setting addon volcano=true in "minikube"
I0923 10:21:37.940488 14503 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0923 10:21:37.940516 14503 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0923 10:21:37.940540 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.940541 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.941041 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.941067 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.941070 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.941087 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.941104 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.941126 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.941145 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.941161 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.941198 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.941256 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.941278 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.941290 14503 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0923 10:21:37.941310 14503 mustload.go:65] Loading cluster: minikube
I0923 10:21:37.941315 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.941496 14503 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:21:37.941586 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.941616 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.941625 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.941640 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.941668 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.941764 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.942221 14503 out.go:177] * Configuring local host environment ...
I0923 10:21:37.942383 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.942403 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.942442 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.940415 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.942475 14503 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0923 10:21:37.942492 14503 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0923 10:21:37.942519 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.942878 14503 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0923 10:21:37.942932 14503 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0923 10:21:37.942968 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.943207 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.943221 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.943250 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.943279 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.943293 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.943342 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.943539 14503 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0923 10:21:37.943563 14503 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0923 10:21:37.943588 14503 host.go:66] Checking if "minikube" exists ...
W0923 10:21:37.943697 14503 out.go:270] *
W0923 10:21:37.943715 14503 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0923 10:21:37.943746 14503 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0923 10:21:37.943759 14503 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0923 10:21:37.943785 14503 out.go:270] *
W0923 10:21:37.943858 14503 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0923 10:21:37.943869 14503 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0923 10:21:37.943875 14503 out.go:270] *
W0923 10:21:37.943897 14503 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0923 10:21:37.943911 14503 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0923 10:21:37.943944 14503 out.go:270] *
W0923 10:21:37.943953 14503 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0923 10:21:37.943987 14503 start.go:235] Will wait 6m0s for node &{Name: IP:10.150.0.16 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 10:21:37.942461 14503 addons.go:69] Setting registry=true in profile "minikube"
I0923 10:21:37.944500 14503 addons.go:234] Setting addon registry=true in "minikube"
I0923 10:21:37.944575 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.945226 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.945246 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.945275 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.945350 14503 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0923 10:21:37.945375 14503 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0923 10:21:37.945397 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:37.945880 14503 out.go:177] * Verifying Kubernetes components...
I0923 10:21:37.947410 14503 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 10:21:37.961276 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:37.961374 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:37.961474 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:37.961979 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:37.962595 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:37.977055 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:37.979263 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.979294 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.979328 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.980028 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.980051 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.980082 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.980366 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:37.980382 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:37.980413 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:37.985305 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:37.985373 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:37.987781 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:37.987862 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:37.993641 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:37.993718 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:37.995086 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:37.995216 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:37.995296 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:37.995350 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:37.997737 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.001545 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.001616 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.008676 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.010658 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.010875 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.016486 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.016537 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.018176 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.018202 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.024504 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.024533 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.025117 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.025139 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.026399 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.028550 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.030068 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.030273 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.031272 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.032430 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.032489 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.033585 14503 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0923 10:21:38.033629 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:38.033840 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.033996 14503 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0923 10:21:38.034040 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:38.038793 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.038825 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.040028 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:38.040854 14503 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0923 10:21:38.041435 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.041462 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.042507 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.042972 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:38.042995 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:38.043024 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:38.040049 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:38.043195 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:38.043500 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.044189 14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0923 10:21:38.044208 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.044239 14503 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0923 10:21:38.044277 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.044432 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1246490440 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0923 10:21:38.046494 14503 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0923 10:21:38.046837 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.048602 14503 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0923 10:21:38.049349 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.049370 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.049812 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.049857 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.049870 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.049883 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.051467 14503 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0923 10:21:38.052689 14503 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0923 10:21:38.054046 14503 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0923 10:21:38.054071 14503 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0923 10:21:38.054078 14503 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0923 10:21:38.054173 14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0923 10:21:38.055240 14503 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0923 10:21:38.055281 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0923 10:21:38.055730 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube707534564 /etc/kubernetes/addons/volcano-deployment.yaml
I0923 10:21:38.057880 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.057935 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.057889 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.064632 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.065750 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.067312 14503 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0923 10:21:38.068433 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.068454 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:38.068641 14503 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0923 10:21:38.068668 14503 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0923 10:21:38.068809 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3746917818 /etc/kubernetes/addons/ig-namespace.yaml
I0923 10:21:38.070167 14503 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0923 10:21:38.071603 14503 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0923 10:21:38.071634 14503 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0923 10:21:38.071759 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1057678417 /etc/kubernetes/addons/yakd-ns.yaml
I0923 10:21:38.074063 14503 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0923 10:21:38.074088 14503 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0923 10:21:38.074403 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1547504557 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0923 10:21:38.075675 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.075695 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.083687 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.083932 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.083953 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.084435 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.084493 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.085202 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.085220 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.085684 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.087963 14503 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0923 10:21:38.089534 14503 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0923 10:21:38.091333 14503 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0923 10:21:38.092616 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.092620 14503 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0923 10:21:38.092645 14503 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0923 10:21:38.092768 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2739954059 /etc/kubernetes/addons/yakd-sa.yaml
I0923 10:21:38.093284 14503 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0923 10:21:38.094310 14503 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0923 10:21:38.094535 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.095183 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:38.095892 14503 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0923 10:21:38.095952 14503 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 10:21:38.095975 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0923 10:21:38.096102 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1590517906 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 10:21:38.096295 14503 out.go:177] - Using image docker.io/registry:2.8.3
I0923 10:21:38.097483 14503 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0923 10:21:38.097543 14503 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0923 10:21:38.097961 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0923 10:21:38.100458 14503 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0923 10:21:38.100501 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0923 10:21:38.100637 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3362508289 /etc/kubernetes/addons/registry-rc.yaml
I0923 10:21:38.102550 14503 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0923 10:21:38.103934 14503 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0923 10:21:38.105329 14503 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0923 10:21:38.106572 14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0923 10:21:38.106606 14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0923 10:21:38.106718 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3016158629 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0923 10:21:38.109647 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.109711 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.112788 14503 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0923 10:21:38.112819 14503 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0923 10:21:38.112966 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2370514550 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0923 10:21:38.113198 14503 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0923 10:21:38.113228 14503 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0923 10:21:38.113336 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4017615969 /etc/kubernetes/addons/yakd-crb.yaml
I0923 10:21:38.116562 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.116619 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.116798 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:38.116840 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:38.118253 14503 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0923 10:21:38.118273 14503 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0923 10:21:38.118368 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1751057690 /etc/kubernetes/addons/registry-svc.yaml
I0923 10:21:38.118635 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.118661 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.123426 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.125192 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.125216 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.125723 14503 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0923 10:21:38.127186 14503 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0923 10:21:38.127215 14503 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0923 10:21:38.127344 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube423584811 /etc/kubernetes/addons/metrics-apiservice.yaml
I0923 10:21:38.132074 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.134022 14503 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0923 10:21:38.135257 14503 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0923 10:21:38.135287 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0923 10:21:38.135423 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3071771337 /etc/kubernetes/addons/deployment.yaml
I0923 10:21:38.137752 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 10:21:38.143398 14503 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0923 10:21:38.143433 14503 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0923 10:21:38.143574 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3484133606 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0923 10:21:38.145608 14503 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0923 10:21:38.145633 14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0923 10:21:38.145729 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2319206378 /etc/kubernetes/addons/rbac-hostpath.yaml
I0923 10:21:38.220476 14503 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0923 10:21:38.220516 14503 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0923 10:21:38.220561 14503 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0923 10:21:38.220596 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0923 10:21:38.220647 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1180843149 /etc/kubernetes/addons/ig-role.yaml
I0923 10:21:38.220922 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube703366994 /etc/kubernetes/addons/registry-proxy.yaml
I0923 10:21:38.222619 14503 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0923 10:21:38.222653 14503 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0923 10:21:38.222883 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3828019453 /etc/kubernetes/addons/yakd-svc.yaml
I0923 10:21:38.223276 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0923 10:21:38.226246 14503 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0923 10:21:38.226286 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0923 10:21:38.226494 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1447327316 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0923 10:21:38.227916 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.227949 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.229640 14503 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0923 10:21:38.229672 14503 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0923 10:21:38.229866 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1613859934 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0923 10:21:38.233867 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.233920 14503 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0923 10:21:38.233938 14503 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0923 10:21:38.233946 14503 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0923 10:21:38.233991 14503 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0923 10:21:38.240370 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:38.240404 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:38.245559 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:38.247457 14503 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0923 10:21:38.247637 14503 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0923 10:21:38.247666 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0923 10:21:38.248149 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1416855435 /etc/kubernetes/addons/yakd-dp.yaml
I0923 10:21:38.250542 14503 out.go:177] - Using image docker.io/busybox:stable
I0923 10:21:38.251844 14503 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0923 10:21:38.251876 14503 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0923 10:21:38.251933 14503 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 10:21:38.251957 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0923 10:21:38.252001 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube260296824 /etc/kubernetes/addons/ig-rolebinding.yaml
I0923 10:21:38.252073 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2246154247 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 10:21:38.253608 14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0923 10:21:38.253651 14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0923 10:21:38.253790 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2527539954 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0923 10:21:38.256524 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0923 10:21:38.256844 14503 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0923 10:21:38.256993 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube584424061 /etc/kubernetes/addons/storageclass.yaml
I0923 10:21:38.263302 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0923 10:21:38.263524 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3595981532 /etc/kubernetes/addons/storage-provisioner.yaml
I0923 10:21:38.272541 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 10:21:38.277769 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0923 10:21:38.286678 14503 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0923 10:21:38.286725 14503 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0923 10:21:38.286809 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0923 10:21:38.286874 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube765725782 /etc/kubernetes/addons/ig-clusterrole.yaml
I0923 10:21:38.291400 14503 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0923 10:21:38.291441 14503 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0923 10:21:38.291606 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2689698310 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0923 10:21:38.308983 14503 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0923 10:21:38.309027 14503 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0923 10:21:38.309165 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1470380499 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0923 10:21:38.309719 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0923 10:21:38.321989 14503 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 10:21:38.322041 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0923 10:21:38.322260 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1265008389 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 10:21:38.332179 14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0923 10:21:38.332220 14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0923 10:21:38.332398 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3455428546 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0923 10:21:38.335259 14503 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0923 10:21:38.335288 14503 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0923 10:21:38.335406 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube536218479 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0923 10:21:38.351497 14503 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0923 10:21:38.351535 14503 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0923 10:21:38.351692 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3160167729 /etc/kubernetes/addons/metrics-server-service.yaml
I0923 10:21:38.358319 14503 exec_runner.go:51] Run: sudo systemctl start kubelet
I0923 10:21:38.358629 14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0923 10:21:38.358654 14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0923 10:21:38.358789 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1681517526 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0923 10:21:38.360634 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 10:21:38.368087 14503 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0923 10:21:38.368121 14503 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0923 10:21:38.368255 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1546842042 /etc/kubernetes/addons/ig-crd.yaml
I0923 10:21:38.394341 14503 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0923 10:21:38.440062 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0923 10:21:38.441391 14503 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0923 10:21:38.441419 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0923 10:21:38.441538 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2448725208 /etc/kubernetes/addons/ig-daemonset.yaml
I0923 10:21:38.447824 14503 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-14" to be "Ready" ...
I0923 10:21:38.452045 14503 node_ready.go:49] node "ubuntu-20-agent-14" has status "Ready":"True"
I0923 10:21:38.452068 14503 node_ready.go:38] duration metric: took 4.126904ms for node "ubuntu-20-agent-14" to be "Ready" ...
I0923 10:21:38.452079 14503 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 10:21:38.459817 14503 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
I0923 10:21:38.472260 14503 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0923 10:21:38.472292 14503 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0923 10:21:38.472427 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1973014927 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0923 10:21:38.495977 14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0923 10:21:38.496009 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0923 10:21:38.496163 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3968753410 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0923 10:21:38.515500 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0923 10:21:38.525105 14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0923 10:21:38.525142 14503 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0923 10:21:38.526396 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1694692617 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0923 10:21:38.572191 14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0923 10:21:38.572222 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0923 10:21:38.572353 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube777964739 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0923 10:21:38.591594 14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0923 10:21:38.591630 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0923 10:21:38.591786 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2691097496 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0923 10:21:38.643613 14503 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 10:21:38.643663 14503 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0923 10:21:38.644378 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube458797997 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 10:21:38.707122 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 10:21:38.903443 14503 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0923 10:21:38.906200 14503 addons.go:475] Verifying addon registry=true in "minikube"
I0923 10:21:38.908775 14503 out.go:177] * Verifying registry addon...
I0923 10:21:38.911934 14503 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0923 10:21:38.916883 14503 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0923 10:21:38.916908 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:39.260753 14503 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0923 10:21:39.427690 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:39.448260 14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.175661017s)
I0923 10:21:39.517787 14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.077669266s)
I0923 10:21:39.517824 14503 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0923 10:21:39.522220 14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.006633645s)
I0923 10:21:39.921720 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:40.198768 14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.838083967s)
W0923 10:21:40.198947 14503 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 10:21:40.198990 14503 retry.go:31] will retry after 252.085237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 10:21:40.415565 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:40.451990 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 10:21:40.465723 14503 pod_ready.go:103] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"False"
I0923 10:21:40.927244 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:41.058966 14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.960954263s)
I0923 10:21:41.394252 14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.686972504s)
I0923 10:21:41.394310 14503 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0923 10:21:41.398029 14503 out.go:177] * Verifying csi-hostpath-driver addon...
I0923 10:21:41.401452 14503 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 10:21:41.406508 14503 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:21:41.406542 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:41.416426 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:41.597886 14503 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.145807847s)
I0923 10:21:41.909255 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:41.916166 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:42.406457 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:42.416061 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:42.466592 14503 pod_ready.go:103] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"False"
I0923 10:21:42.906788 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:42.916154 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:43.407086 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:43.416317 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:43.906645 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:43.915943 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:44.407111 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:44.415313 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:44.906059 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:44.916094 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:44.965990 14503 pod_ready.go:103] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"False"
I0923 10:21:45.077264 14503 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0923 10:21:45.077408 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3374174569 /var/lib/minikube/google_application_credentials.json
I0923 10:21:45.089201 14503 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0923 10:21:45.089330 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2345360888 /var/lib/minikube/google_cloud_project
I0923 10:21:45.101068 14503 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0923 10:21:45.101126 14503 host.go:66] Checking if "minikube" exists ...
I0923 10:21:45.101958 14503 kubeconfig.go:125] found "minikube" server: "https://10.150.0.16:8443"
I0923 10:21:45.101982 14503 api_server.go:166] Checking apiserver status ...
I0923 10:21:45.102021 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:45.123893 14503 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15821/cgroup
I0923 10:21:45.137507 14503 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb"
I0923 10:21:45.137581 14503 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod805ddfb3afe8beaaeb1a27a5b27c62e1/e4810d0b22eb96c68ceb540d931e0c716ed34f495a180b50bbf5a4eb1a6e6afb/freezer.state
I0923 10:21:45.148293 14503 api_server.go:204] freezer state: "THAWED"
I0923 10:21:45.148321 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:45.152648 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:45.152803 14503 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0923 10:21:45.155723 14503 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 10:21:45.157298 14503 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0923 10:21:45.158581 14503 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0923 10:21:45.158626 14503 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0923 10:21:45.158778 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2179341002 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0923 10:21:45.169606 14503 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0923 10:21:45.169645 14503 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0923 10:21:45.169780 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube497909863 /etc/kubernetes/addons/gcp-auth-service.yaml
I0923 10:21:45.181323 14503 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 10:21:45.181354 14503 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0923 10:21:45.181461 14503 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3084832486 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 10:21:45.192626 14503 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 10:21:45.406175 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:45.415099 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:46.083415 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:46.084393 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:46.194451 14503 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0923 10:21:46.195974 14503 out.go:177] * Verifying gcp-auth addon...
I0923 10:21:46.198098 14503 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0923 10:21:46.200381 14503 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 10:21:46.405750 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:46.416216 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:46.905397 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:46.915445 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:47.406770 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:47.415781 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:47.465907 14503 pod_ready.go:103] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"False"
I0923 10:21:47.906003 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:47.916353 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:48.507753 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:48.508565 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:48.908273 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:48.967062 14503 pod_ready.go:93] pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:48.967089 14503 pod_ready.go:82] duration metric: took 10.507247017s for pod "etcd-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
I0923 10:21:48.967099 14503 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
I0923 10:21:48.971977 14503 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:48.972004 14503 pod_ready.go:82] duration metric: took 4.897345ms for pod "kube-apiserver-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
I0923 10:21:48.972018 14503 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
I0923 10:21:48.977715 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:49.405863 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:49.415114 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:49.478325 14503 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:49.478344 14503 pod_ready.go:82] duration metric: took 506.318863ms for pod "kube-controller-manager-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
I0923 10:21:49.478354 14503 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
I0923 10:21:49.482300 14503 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-14" in "kube-system" namespace has status "Ready":"True"
I0923 10:21:49.482322 14503 pod_ready.go:82] duration metric: took 3.961039ms for pod "kube-scheduler-ubuntu-20-agent-14" in "kube-system" namespace to be "Ready" ...
I0923 10:21:49.482333 14503 pod_ready.go:39] duration metric: took 11.030240368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 10:21:49.482355 14503 api_server.go:52] waiting for apiserver process to appear ...
I0923 10:21:49.482413 14503 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 10:21:49.501517 14503 api_server.go:72] duration metric: took 11.557408673s to wait for apiserver process to appear ...
I0923 10:21:49.501548 14503 api_server.go:88] waiting for apiserver healthz status ...
I0923 10:21:49.501577 14503 api_server.go:253] Checking apiserver healthz at https://10.150.0.16:8443/healthz ...
I0923 10:21:49.505603 14503 api_server.go:279] https://10.150.0.16:8443/healthz returned 200:
ok
I0923 10:21:49.506538 14503 api_server.go:141] control plane version: v1.31.1
I0923 10:21:49.506565 14503 api_server.go:131] duration metric: took 5.009313ms to wait for apiserver health ...
I0923 10:21:49.506576 14503 system_pods.go:43] waiting for kube-system pods to appear ...
I0923 10:21:49.514838 14503 system_pods.go:59] 16 kube-system pods found
I0923 10:21:49.514904 14503 system_pods.go:61] "coredns-7c65d6cfc9-5wzm7" [d5873fad-13d1-45af-a03b-45c4b855def2] Running
I0923 10:21:49.514917 14503 system_pods.go:61] "csi-hostpath-attacher-0" [3a0eea15-a39e-4405-8fbf-a222f5615313] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 10:21:49.514932 14503 system_pods.go:61] "csi-hostpath-resizer-0" [cabd0f23-eb05-4c15-b63d-a277f022f80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 10:21:49.514948 14503 system_pods.go:61] "csi-hostpathplugin-nfj4v" [79523175-97ee-406e-8d74-e3335dcfa6af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 10:21:49.514958 14503 system_pods.go:61] "etcd-ubuntu-20-agent-14" [3a571447-937e-4b39-9db1-70ea79ff7a4a] Running
I0923 10:21:49.514964 14503 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-14" [b02db37b-9c5b-4647-8204-a20e7ed4e588] Running
I0923 10:21:49.514970 14503 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-14" [a111e4cd-7725-4c2a-af1f-562e8673fdc1] Running
I0923 10:21:49.514975 14503 system_pods.go:61] "kube-proxy-9rf8g" [8b8f2ed9-4e48-4f1b-90db-03cbeacc08a1] Running
I0923 10:21:49.514980 14503 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-14" [81113013-462f-4e54-869e-86ea8ab47602] Running
I0923 10:21:49.514990 14503 system_pods.go:61] "metrics-server-84c5f94fbc-nnrdh" [f57d2252-a248-4969-9111-da3afb4eebd3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 10:21:49.514995 14503 system_pods.go:61] "nvidia-device-plugin-daemonset-t8s2p" [7c3d0947-5713-4d85-a7a0-09660e93cfcd] Running
I0923 10:21:49.515003 14503 system_pods.go:61] "registry-66c9cd494c-8hvdw" [678aa223-edb6-4a6c-b3e5-5d95e0ea40f6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 10:21:49.515010 14503 system_pods.go:61] "registry-proxy-4nzb4" [35894a53-f7e8-4743-9eea-200f3986fcd6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 10:21:49.515016 14503 system_pods.go:61] "snapshot-controller-56fcc65765-q8vm4" [29d8b561-2be7-4cb9-8726-2ca9502c446f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 10:21:49.515023 14503 system_pods.go:61] "snapshot-controller-56fcc65765-w9bmc" [d74b1548-465c-47f2-b66f-97290b19df8a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 10:21:49.515026 14503 system_pods.go:61] "storage-provisioner" [18decbc8-e338-4af9-82cc-f90640dc8db2] Running
I0923 10:21:49.515034 14503 system_pods.go:74] duration metric: took 8.452445ms to wait for pod list to return data ...
I0923 10:21:49.515042 14503 default_sa.go:34] waiting for default service account to be created ...
I0923 10:21:49.517751 14503 default_sa.go:45] found service account: "default"
I0923 10:21:49.517772 14503 default_sa.go:55] duration metric: took 2.724884ms for default service account to be created ...
I0923 10:21:49.517780 14503 system_pods.go:116] waiting for k8s-apps to be running ...
I0923 10:21:49.570028 14503 system_pods.go:86] 16 kube-system pods found
I0923 10:21:49.570079 14503 system_pods.go:89] "coredns-7c65d6cfc9-5wzm7" [d5873fad-13d1-45af-a03b-45c4b855def2] Running
I0923 10:21:49.570093 14503 system_pods.go:89] "csi-hostpath-attacher-0" [3a0eea15-a39e-4405-8fbf-a222f5615313] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 10:21:49.570395 14503 system_pods.go:89] "csi-hostpath-resizer-0" [cabd0f23-eb05-4c15-b63d-a277f022f80b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 10:21:49.570432 14503 system_pods.go:89] "csi-hostpathplugin-nfj4v" [79523175-97ee-406e-8d74-e3335dcfa6af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 10:21:49.570443 14503 system_pods.go:89] "etcd-ubuntu-20-agent-14" [3a571447-937e-4b39-9db1-70ea79ff7a4a] Running
I0923 10:21:49.570451 14503 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-14" [b02db37b-9c5b-4647-8204-a20e7ed4e588] Running
I0923 10:21:49.570462 14503 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-14" [a111e4cd-7725-4c2a-af1f-562e8673fdc1] Running
I0923 10:21:49.570469 14503 system_pods.go:89] "kube-proxy-9rf8g" [8b8f2ed9-4e48-4f1b-90db-03cbeacc08a1] Running
I0923 10:21:49.570480 14503 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-14" [81113013-462f-4e54-869e-86ea8ab47602] Running
I0923 10:21:49.570491 14503 system_pods.go:89] "metrics-server-84c5f94fbc-nnrdh" [f57d2252-a248-4969-9111-da3afb4eebd3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 10:21:49.570506 14503 system_pods.go:89] "nvidia-device-plugin-daemonset-t8s2p" [7c3d0947-5713-4d85-a7a0-09660e93cfcd] Running
I0923 10:21:49.570522 14503 system_pods.go:89] "registry-66c9cd494c-8hvdw" [678aa223-edb6-4a6c-b3e5-5d95e0ea40f6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 10:21:49.570538 14503 system_pods.go:89] "registry-proxy-4nzb4" [35894a53-f7e8-4743-9eea-200f3986fcd6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 10:21:49.570554 14503 system_pods.go:89] "snapshot-controller-56fcc65765-q8vm4" [29d8b561-2be7-4cb9-8726-2ca9502c446f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 10:21:49.570572 14503 system_pods.go:89] "snapshot-controller-56fcc65765-w9bmc" [d74b1548-465c-47f2-b66f-97290b19df8a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 10:21:49.570587 14503 system_pods.go:89] "storage-provisioner" [18decbc8-e338-4af9-82cc-f90640dc8db2] Running
I0923 10:21:49.570600 14503 system_pods.go:126] duration metric: took 52.812878ms to wait for k8s-apps to be running ...
I0923 10:21:49.570612 14503 system_svc.go:44] waiting for kubelet service to be running ....
I0923 10:21:49.570678 14503 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0923 10:21:49.587814 14503 system_svc.go:56] duration metric: took 17.18863ms WaitForService to wait for kubelet
I0923 10:21:49.587851 14503 kubeadm.go:582] duration metric: took 11.643747324s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 10:21:49.587876 14503 node_conditions.go:102] verifying NodePressure condition ...
I0923 10:21:49.764259 14503 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0923 10:21:49.764294 14503 node_conditions.go:123] node cpu capacity is 8
I0923 10:21:49.764308 14503 node_conditions.go:105] duration metric: took 176.426384ms to run NodePressure ...
I0923 10:21:49.764322 14503 start.go:241] waiting for startup goroutines ...
I0923 10:21:49.906521 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:49.915933 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:50.405140 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:50.415221 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:50.906978 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:50.915763 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:51.406053 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:51.415144 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:51.907475 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:51.914953 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:52.405726 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:52.415656 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:52.906194 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:52.915150 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:53.405748 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:53.415910 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:53.906513 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:53.915924 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:54.406225 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:54.416164 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:54.905488 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:54.915562 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:55.406820 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:55.415665 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:55.906387 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:55.915427 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 10:21:56.406971 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:56.415637 14503 kapi.go:107] duration metric: took 17.503706177s to wait for kubernetes.io/minikube-addons=registry ...
I0923 10:21:56.906672 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:57.406782 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:57.905844 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:58.406679 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:58.906888 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:59.407712 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:21:59.906319 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:00.406397 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:00.905457 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:01.406653 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:01.907480 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:02.405985 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:02.906867 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:03.406243 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:03.906102 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:04.406339 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:04.906013 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:05.406140 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:05.906651 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:06.406703 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:06.906665 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:07.406169 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:07.906078 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:08.407243 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:08.906711 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:09.405824 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:09.905690 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:10.407459 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:10.907268 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:11.406613 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 10:22:11.905038 14503 kapi.go:107] duration metric: took 30.503589218s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0923 10:22:27.701454 14503 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 10:22:27.701485 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:28.201668 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:28.702152 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:29.201240 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:29.700555 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:30.201894 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:30.702345 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:31.202002 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:31.701122 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:32.201037 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:32.701317 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:33.200793 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:33.702028 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:34.201583 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:34.701290 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:35.202604 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:35.701699 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:36.201941 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:36.702056 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:37.201077 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:37.700789 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:38.201104 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:38.700621 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:39.201685 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:39.701535 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:40.201586 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:40.701820 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:41.201406 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:41.700970 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:42.201917 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:42.702177 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:43.201160 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:43.701568 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:44.201598 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:44.701363 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:45.201083 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:45.700853 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:46.200881 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:46.701700 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:47.201549 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:47.701006 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:48.200907 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:48.701122 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:49.201471 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:49.701247 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:50.201325 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:50.701438 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:51.201134 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:51.700876 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:52.201678 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:52.701894 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:53.220060 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:53.703774 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:54.201081 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:54.701812 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:55.202019 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:55.701194 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:56.201198 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:56.700780 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:57.201886 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:57.701957 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:58.200797 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:58.702247 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:59.201177 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:22:59.700825 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:00.201713 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:00.702049 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:01.202019 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:01.701014 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:02.201226 14503 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 10:23:02.701493 14503 kapi.go:107] duration metric: took 1m16.503392252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0923 10:23:02.703705 14503 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0923 10:23:02.705378 14503 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0923 10:23:02.707040 14503 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0923 10:23:02.708912 14503 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, storage-provisioner, yakd, storage-provisioner-rancher, metrics-server, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0923 10:23:02.710572 14503 addons.go:510] duration metric: took 1m24.77033752s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner storage-provisioner yakd storage-provisioner-rancher metrics-server inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0923 10:23:02.710628 14503 start.go:246] waiting for cluster config update ...
I0923 10:23:02.710651 14503 start.go:255] writing updated cluster config ...
I0923 10:23:02.710989 14503 exec_runner.go:51] Run: rm -f paused
I0923 10:23:02.756604 14503 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0923 10:23:02.758717 14503 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Mon 2024-08-26 20:51:04 UTC, end at Mon 2024-09-23 10:32:55 UTC. --
Sep 23 10:25:04 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:25:04Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 23 10:25:06 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:25:06.289362488Z" level=info msg="ignoring event" container=7f298b17f6ef65355bf64e73777e1e5e98f9121a93deedae419228f701a7e404 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:25:07 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:25:07.811464067Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=83935512cb0e36c8 traceID=3dd788051f76aaa6ebee96a31b148398
Sep 23 10:25:07 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:25:07.815270548Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=83935512cb0e36c8 traceID=3dd788051f76aaa6ebee96a31b148398
Sep 23 10:26:32 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:26:32.817014659Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=8a11cc7f365f88f6 traceID=fa8bf6c20e5e07ea007b4d1ec84d7e89
Sep 23 10:26:32 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:26:32.819193473Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=8a11cc7f365f88f6 traceID=fa8bf6c20e5e07ea007b4d1ec84d7e89
Sep 23 10:27:47 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:27:47Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 23 10:27:49 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:27:49.363106193Z" level=info msg="ignoring event" container=19bc9bdaffa6ca1785506c4e9a9ebb2a8ba015cf365fda6d059b5d3a6aec0814 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:29:20 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:29:20.810253208Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=521730c1c60d4162 traceID=d8d809143cecf3f3830d65801de13869
Sep 23 10:29:20 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:29:20.812691319Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=521730c1c60d4162 traceID=d8d809143cecf3f3830d65801de13869
Sep 23 10:31:54 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:31:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f41a0ce81dad7e64e942ffd0ab659aa0bf5f6b16796f020c435fcdcbaa231cbb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 23 10:31:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:31:54.725607832Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8a767bd34081992d traceID=32016fc8fcd97f2d60da52ff6834b925
Sep 23 10:31:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:31:54.727862192Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8a767bd34081992d traceID=32016fc8fcd97f2d60da52ff6834b925
Sep 23 10:31:55 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:31:55.745458427Z" level=info msg="ignoring event" container=f41a0ce81dad7e64e942ffd0ab659aa0bf5f6b16796f020c435fcdcbaa231cbb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:31:55 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:31:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ecd0359970545bb26f3f592747b58ffe3527516bf02a9d1ed47e1f7e0175dce/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 23 10:32:07 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:07.819425738Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=429ec37243ce98e4 traceID=0bfaf718ae9aa0af010f760207d714e3
Sep 23 10:32:07 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:07.821715925Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=429ec37243ce98e4 traceID=0bfaf718ae9aa0af010f760207d714e3
Sep 23 10:32:36 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:36.813953183Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a6d3e2506f50bafc traceID=15646b462a54bf9dfe4bcd65f35b1522
Sep 23 10:32:36 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:36.816413210Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a6d3e2506f50bafc traceID=15646b462a54bf9dfe4bcd65f35b1522
Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.271672386Z" level=info msg="ignoring event" container=8ecd0359970545bb26f3f592747b58ffe3527516bf02a9d1ed47e1f7e0175dce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.561368758Z" level=info msg="ignoring event" container=904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.619323276Z" level=info msg="ignoring event" container=c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.723900181Z" level=info msg="ignoring event" container=76622f96976a61ebb90e298bb01c77ca97f03d8efc0d4be247ee82e8bd518ed9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:32:54 ubuntu-20-agent-14 dockerd[14717]: time="2024-09-23T10:32:54.785114696Z" level=info msg="ignoring event" container=3d503d7acf0011f2fdf522e648365ff5ef0fb4386566446c3e637d26ce511ce4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 10:32:54 ubuntu-20-agent-14 cri-dockerd[15045]: time="2024-09-23T10:32:54Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
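The repeated pull failures above all HEAD the same kind of Docker Registry v2 manifest endpoint. As a minimal sketch (assuming the standard `/v2/<name>/manifests/<reference>` layout, which the logged URLs follow), this is how an image reference maps to the URL the daemon probes; a 401 on that HEAD means the anonymous token handshake with gcr.io did not succeed:

```python
# Sketch: derive the registry v2 manifest URL that dockerd HEADs for a pull.
# Assumes the conventional /v2/<name>/manifests/<reference> layout, which is
# exactly what the "unauthorized: authentication failed" errors above hit.
def manifest_url(image: str) -> str:
    registry, _, rest = image.partition("/")   # host before first slash
    name, _, tag = rest.partition(":")         # repository and optional tag
    return f"https://{registry}/v2/{name}/manifests/{tag or 'latest'}"

print(manifest_url("gcr.io/k8s-minikube/busybox:1.28.4-glibc"))
# → https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc
print(manifest_url("gcr.io/k8s-minikube/busybox"))
# → https://gcr.io/v2/k8s-minikube/busybox/manifests/latest
```

Both outputs match the URLs in the dockerd errors above, which is why the later `latest` failures appear once the test pod stops pinning the `1.28.4-glibc` tag.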
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
22352cc886702 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 1 second ago Running gadget 7 8eef71572cf09 gadget-w2hzg
19bc9bdaffa6c ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 5 minutes ago Exited gadget 6 8eef71572cf09 gadget-w2hzg
c300d56f5b854 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 b643a70104ccf gcp-auth-89d5ffd79-6kvbf
bfad91385c2d2 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 046b887c2a0d6 csi-hostpathplugin-nfj4v
15acaecbec354 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 046b887c2a0d6 csi-hostpathplugin-nfj4v
a5ec093612b98 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 046b887c2a0d6 csi-hostpathplugin-nfj4v
46e9cced4c57d registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 046b887c2a0d6 csi-hostpathplugin-nfj4v
d26c55945b0ac registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 046b887c2a0d6 csi-hostpathplugin-nfj4v
d8e19afc126d9 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 0ccc27fcecd99 csi-hostpath-resizer-0
6ca977d99b2bd registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 e0644af257ee8 csi-hostpath-attacher-0
9a2be67883038 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 046b887c2a0d6 csi-hostpathplugin-nfj4v
71e3784777015 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 075e5b313127f snapshot-controller-56fcc65765-w9bmc
a8d5e1490050d registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 e877e9aa00ffc snapshot-controller-56fcc65765-q8vm4
4cbb9e4f61fd6 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 10 minutes ago Running local-path-provisioner 0 4e8e120c340de local-path-provisioner-86d989889c-x4dtk
64d8f5dd44360 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 10 minutes ago Running metrics-server 0 b908ae4e50b0c metrics-server-84c5f94fbc-nnrdh
0a5c8535fcb39 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 66be03f8a385d yakd-dashboard-67d98fc6b-kf48r
c46422e23a32e gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 11 minutes ago Exited registry-proxy 0 3d503d7acf001 registry-proxy-4nzb4
904225ddf913e registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 11 minutes ago Exited registry 0 76622f96976a6 registry-66c9cd494c-8hvdw
e3d21975c1b48 gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 11 minutes ago Running cloud-spanner-emulator 0 1d5c3407a0954 cloud-spanner-emulator-5b584cc74-psstj
4ee978a90a8ab nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 b88a054245569 nvidia-device-plugin-daemonset-t8s2p
e97d25581fbca c69fa2e9cbf5f 11 minutes ago Running coredns 0 24c635eb81040 coredns-7c65d6cfc9-5wzm7
29d253e8d623a 6e38f40d628db 11 minutes ago Running storage-provisioner 0 92a657b41c8a5 storage-provisioner
d4b4134082f2d 60c005f310ff3 11 minutes ago Running kube-proxy 0 3b0d433197544 kube-proxy-9rf8g
e4810d0b22eb9 6bab7719df100 11 minutes ago Running kube-apiserver 0 f689bd81db477 kube-apiserver-ubuntu-20-agent-14
7c0ce9c202251 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 56ee3378f5d5f kube-controller-manager-ubuntu-20-agent-14
c2d044bdb00e2 2e96e5913fc06 11 minutes ago Running etcd 0 4e232eeccda83 etcd-ubuntu-20-agent-14
d978992a060ad 9aa1fad941575 11 minutes ago Running kube-scheduler 0 0bd91dbfc879b kube-scheduler-ubuntu-20-agent-14
==> coredns [e97d25581fbc] <==
[INFO] 10.244.0.7:38641 - 38636 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151742s
[INFO] 10.244.0.7:50028 - 20629 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090517s
[INFO] 10.244.0.7:50028 - 33680 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137922s
[INFO] 10.244.0.7:58387 - 5177 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000100502s
[INFO] 10.244.0.7:58387 - 41790 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000129977s
[INFO] 10.244.0.7:55145 - 49590 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000083581s
[INFO] 10.244.0.7:55145 - 9649 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000121667s
[INFO] 10.244.0.7:56969 - 56607 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000049261s
[INFO] 10.244.0.7:56969 - 59932 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000067922s
[INFO] 10.244.0.7:49267 - 62470 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071772s
[INFO] 10.244.0.7:49267 - 32257 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101661s
[INFO] 10.244.0.22:52451 - 16054 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000315526s
[INFO] 10.244.0.22:59849 - 37057 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000395804s
[INFO] 10.244.0.22:50017 - 61796 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191788s
[INFO] 10.244.0.22:36912 - 37270 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000202263s
[INFO] 10.244.0.22:36315 - 45300 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000446114s
[INFO] 10.244.0.22:48482 - 52497 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000513024s
[INFO] 10.244.0.22:35338 - 16884 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003651566s
[INFO] 10.244.0.22:55363 - 8433 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004289097s
[INFO] 10.244.0.22:37918 - 27285 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003006472s
[INFO] 10.244.0.22:36494 - 24992 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003157242s
[INFO] 10.244.0.22:60636 - 52283 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003840319s
[INFO] 10.244.0.22:53898 - 63064 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004447877s
[INFO] 10.244.0.22:34908 - 34030 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001458917s
[INFO] 10.244.0.22:47686 - 37552 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001577081s
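The CoreDNS query sequence above is ordinary glibc-style search-list expansion under the pod's `ndots:5` resolv.conf (visible in the cri-dockerd rewrite lines earlier): a name with fewer than five dots is tried against every search suffix first, NXDOMAIN each time, before the absolute name finally resolves. A small sketch of that expansion, using the suffixes seen in the log (hypothetical helper, not resolver source):

```python
# Sketch (assumption): glibc-style search expansion with ndots.
# "registry.kube-system.svc.cluster.local" has 4 dots < ndots(5), so the
# search suffixes are tried first — the exact NXDOMAIN run in the log above.
def candidate_fqdns(name: str, search: list[str], ndots: int = 5) -> list[str]:
    if name.endswith("."):                      # already fully qualified
        return [name]
    if name.count(".") < ndots:                 # suffixes first, bare name last
        return [f"{name}.{s}" for s in search] + [name]
    return [name] + [f"{name}.{s}" for s in search]

search = ["svc.cluster.local", "cluster.local",
          "us-east4-a.c.k8s-minikube.internal",
          "c.k8s-minikube.internal", "google.internal"]
for fqdn in candidate_fqdns("registry.kube-system.svc.cluster.local", search):
    print(fqdn)
```

The printed order matches the A/AAAA queries CoreDNS logged from 10.244.0.7: five NXDOMAIN search-suffix attempts, then the bare service name answered NOERROR.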
==> describe nodes <==
Name: ubuntu-20-agent-14
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-14
kubernetes.io/os=linux
minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_23T10_21_34_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-14
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-14"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 23 Sep 2024 10:21:30 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-14
AcquireTime: <unset>
RenewTime: Mon, 23 Sep 2024 10:32:49 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 23 Sep 2024 10:28:42 +0000 Mon, 23 Sep 2024 10:21:30 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 23 Sep 2024 10:28:42 +0000 Mon, 23 Sep 2024 10:21:30 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 23 Sep 2024 10:28:42 +0000 Mon, 23 Sep 2024 10:21:30 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 23 Sep 2024 10:28:42 +0000 Mon, 23 Sep 2024 10:21:32 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.150.0.16
Hostname: ubuntu-20-agent-14
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859320Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859320Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 406ac382-0a98-38ff-f706-d8fe8e823dbb
Boot ID: d3fe8ac7-d9e0-4b15-b63e-3a53514cb0a6
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m14s
default cloud-spanner-emulator-5b584cc74-psstj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-w2hzg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-6kvbf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-5wzm7 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-nfj4v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-14 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-14 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-14 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-9rf8g 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-14 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-nnrdh 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-t8s2p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-q8vm4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-w9bmc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-x4dtk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-kf48r 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-14 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-14 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-14 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-14 event: Registered Node ubuntu-20-agent-14 in Controller
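The percentages in the "Allocated resources" block of the node description can be reproduced from the Capacity figures; the printed values appear to truncate toward zero rather than round. A quick check under that assumption (`pct` is an illustrative helper, not kubectl code):

```python
# Sketch: reproduce the "Allocated resources" percentages from the node
# description above, assuming truncation toward zero (matches the output).
def pct(request: int, allocatable: int) -> int:
    return int(request * 100 / allocatable)

print(pct(850, 8000))             # cpu: 850m requested of 8 cores → 10
print(pct(498 * 1024, 32859320))  # memory: 498Mi of 32859320Ki → 1
```

Both match the table: `cpu 850m (10%)` and `memory 498Mi (1%)`.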
==> dmesg <==
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 8e f7 c2 23 6a 08 06
[ +0.036412] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 94 41 71 2e 3c 08 06
[Sep23 10:22] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a ec 0f 99 5e 7c 08 06
[ +0.877496] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 cf b9 0e 2a 24 08 06
[ +1.184288] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 23 a3 a7 df ad 08 06
[ +4.616115] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a df 1e 3a 95 69 08 06
[ +0.071226] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 30 13 a4 f0 02 08 06
[ +0.442683] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 e2 99 96 a8 f5 08 06
[ +4.893312] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff de 97 6d c6 63 7b 08 06
[ +36.870122] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a 72 fc 2d 44 26 08 06
[ +0.044307] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e af f9 da 67 0c 08 06
[Sep23 10:23] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 68 8b a9 f6 46 08 06
[ +0.000506] IPv4: martian source 10.244.0.22 from 10.244.0.5, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff ba 3f d4 2a 70 fd 08 06
==> etcd [c2d044bdb00e] <==
{"level":"info","ts":"2024-09-23T10:21:29.883346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-23T10:21:29.883731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.150.0.16:2379"}
{"level":"info","ts":"2024-09-23T10:21:45.634448Z","caller":"traceutil/trace.go:171","msg":"trace[1639837950] linearizableReadLoop","detail":"{readStateIndex:834; appliedIndex:832; }","duration":"107.425793ms","start":"2024-09-23T10:21:45.527003Z","end":"2024-09-23T10:21:45.634429Z","steps":["trace[1639837950] 'read index received' (duration: 39.791325ms)","trace[1639837950] 'applied index is now lower than readState.Index' (duration: 67.633692ms)"],"step_count":2}
{"level":"info","ts":"2024-09-23T10:21:45.634580Z","caller":"traceutil/trace.go:171","msg":"trace[952413697] transaction","detail":"{read_only:false; response_revision:819; number_of_response:1; }","duration":"108.938233ms","start":"2024-09-23T10:21:45.525609Z","end":"2024-09-23T10:21:45.634547Z","steps":["trace[952413697] 'process raft request' (duration: 108.681745ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-23T10:21:45.634627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.599617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
{"level":"info","ts":"2024-09-23T10:21:45.634680Z","caller":"traceutil/trace.go:171","msg":"trace[1028306469] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:819; }","duration":"107.676755ms","start":"2024-09-23T10:21:45.526994Z","end":"2024-09-23T10:21:45.634671Z","steps":["trace[1028306469] 'agreement among raft nodes before linearized reading' (duration: 107.507775ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-23T10:21:45.885765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.680452ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6572038415507393716 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ranges/serviceips\" mod_revision:751 > success:<request_put:<key:\"/registry/ranges/serviceips\" value_size:130935 >> failure:<request_range:<key:\"/registry/ranges/serviceips\" > >>","response":"size:16"}
{"level":"info","ts":"2024-09-23T10:21:45.885841Z","caller":"traceutil/trace.go:171","msg":"trace[928141289] transaction","detail":"{read_only:false; response_revision:820; number_of_response:1; }","duration":"245.980017ms","start":"2024-09-23T10:21:45.639850Z","end":"2024-09-23T10:21:45.885830Z","steps":["trace[928141289] 'process raft request' (duration: 125.692368ms)","trace[928141289] 'compare' (duration: 119.500325ms)"],"step_count":2}
{"level":"info","ts":"2024-09-23T10:21:46.080894Z","caller":"traceutil/trace.go:171","msg":"trace[2089597629] linearizableReadLoop","detail":"{readStateIndex:838; appliedIndex:836; }","duration":"185.004422ms","start":"2024-09-23T10:21:45.895871Z","end":"2024-09-23T10:21:46.080875Z","steps":["trace[2089597629] 'read index received' (duration: 61.637409ms)","trace[2089597629] 'applied index is now lower than readState.Index' (duration: 123.366413ms)"],"step_count":2}
{"level":"warn","ts":"2024-09-23T10:21:46.081029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.570257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2024-09-23T10:21:46.081067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.975446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ubuntu-20-agent-14\" ","response":"range_response_count:1 size:5788"}
{"level":"info","ts":"2024-09-23T10:21:46.081157Z","caller":"traceutil/trace.go:171","msg":"trace[358894035] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ubuntu-20-agent-14; range_end:; response_count:1; response_revision:823; }","duration":"119.069826ms","start":"2024-09-23T10:21:45.962076Z","end":"2024-09-23T10:21:46.081146Z","steps":["trace[358894035] 'agreement among raft nodes before linearized reading' (duration: 118.930277ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-23T10:21:46.081088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.208987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
{"level":"info","ts":"2024-09-23T10:21:46.081239Z","caller":"traceutil/trace.go:171","msg":"trace[2131469492] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:823; }","duration":"185.3612ms","start":"2024-09-23T10:21:45.895867Z","end":"2024-09-23T10:21:46.081228Z","steps":["trace[2131469492] 'agreement among raft nodes before linearized reading' (duration: 185.102663ms)"],"step_count":1}
{"level":"info","ts":"2024-09-23T10:21:46.081099Z","caller":"traceutil/trace.go:171","msg":"trace[814132744] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:823; }","duration":"177.654464ms","start":"2024-09-23T10:21:45.903435Z","end":"2024-09-23T10:21:46.081090Z","steps":["trace[814132744] 'agreement among raft nodes before linearized reading' (duration: 177.532411ms)"],"step_count":1}
{"level":"info","ts":"2024-09-23T10:21:46.081037Z","caller":"traceutil/trace.go:171","msg":"trace[1422606710] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"186.350105ms","start":"2024-09-23T10:21:45.894674Z","end":"2024-09-23T10:21:46.081024Z","steps":["trace[1422606710] 'process raft request' (duration: 143.806637ms)","trace[1422606710] 'compare' (duration: 42.278558ms)"],"step_count":2}
{"level":"warn","ts":"2024-09-23T10:21:46.081889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.001137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-23T10:21:46.081928Z","caller":"traceutil/trace.go:171","msg":"trace[435805856] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:823; }","duration":"168.044703ms","start":"2024-09-23T10:21:45.913874Z","end":"2024-09-23T10:21:46.081919Z","steps":["trace[435805856] 'agreement among raft nodes before linearized reading' (duration: 167.191781ms)"],"step_count":1}
{"level":"info","ts":"2024-09-23T10:21:48.505502Z","caller":"traceutil/trace.go:171","msg":"trace[337875417] linearizableReadLoop","detail":"{readStateIndex:874; appliedIndex:873; }","duration":"102.12531ms","start":"2024-09-23T10:21:48.403358Z","end":"2024-09-23T10:21:48.505483Z","steps":["trace[337875417] 'read index received' (duration: 102.009164ms)","trace[337875417] 'applied index is now lower than readState.Index' (duration: 115.586µs)"],"step_count":2}
{"level":"info","ts":"2024-09-23T10:21:48.505582Z","caller":"traceutil/trace.go:171","msg":"trace[2090749146] transaction","detail":"{read_only:false; response_revision:858; number_of_response:1; }","duration":"103.562838ms","start":"2024-09-23T10:21:48.402005Z","end":"2024-09-23T10:21:48.505568Z","steps":["trace[2090749146] 'process raft request' (duration: 103.367681ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-23T10:21:48.505643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.263667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-23T10:21:48.505688Z","caller":"traceutil/trace.go:171","msg":"trace[347712081] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:858; }","duration":"102.323305ms","start":"2024-09-23T10:21:48.403354Z","end":"2024-09-23T10:21:48.505678Z","steps":["trace[347712081] 'agreement among raft nodes before linearized reading' (duration: 102.229014ms)"],"step_count":1}
{"level":"info","ts":"2024-09-23T10:31:30.029497Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1691}
{"level":"info","ts":"2024-09-23T10:31:30.053635Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1691,"took":"23.576898ms","hash":1617186798,"current-db-size-bytes":8450048,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":4345856,"current-db-size-in-use":"4.3 MB"}
{"level":"info","ts":"2024-09-23T10:31:30.053692Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1617186798,"revision":1691,"compact-revision":-1}
==> gcp-auth [c300d56f5b85] <==
2024/09/23 10:23:01 GCP Auth Webhook started!
2024/09/23 10:23:18 Ready to marshal response ...
2024/09/23 10:23:18 Ready to write response ...
2024/09/23 10:23:19 Ready to marshal response ...
2024/09/23 10:23:19 Ready to write response ...
2024/09/23 10:23:41 Ready to marshal response ...
2024/09/23 10:23:41 Ready to write response ...
2024/09/23 10:23:41 Ready to marshal response ...
2024/09/23 10:23:41 Ready to write response ...
2024/09/23 10:23:41 Ready to marshal response ...
2024/09/23 10:23:41 Ready to write response ...
2024/09/23 10:31:54 Ready to marshal response ...
2024/09/23 10:31:54 Ready to write response ...
==> kernel <==
10:32:55 up 15 min, 0 users, load average: 0.67, 0.52, 0.45
Linux ubuntu-20-agent-14 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [e4810d0b22eb] <==
W0923 10:22:18.838865 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.192.7:443: connect: connection refused
W0923 10:22:19.940734 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.192.7:443: connect: connection refused
W0923 10:22:27.204977 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.224.56:443: connect: connection refused
E0923 10:22:27.205013 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.224.56:443: connect: connection refused" logger="UnhandledError"
W0923 10:22:49.216113 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.224.56:443: connect: connection refused
E0923 10:22:49.216157 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.224.56:443: connect: connection refused" logger="UnhandledError"
W0923 10:22:49.223833 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.224.56:443: connect: connection refused
E0923 10:22:49.223872 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.224.56:443: connect: connection refused" logger="UnhandledError"
I0923 10:23:19.040942 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0923 10:23:19.058515 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0923 10:23:31.478429 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0923 10:23:31.491944 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0923 10:23:31.594148 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0923 10:23:31.610423 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0923 10:23:31.679679 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0923 10:23:31.813688 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 10:23:31.887723 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 10:23:31.914330 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0923 10:23:32.639302 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0923 10:23:32.670531 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0923 10:23:32.680769 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0923 10:23:32.868505 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0923 10:23:32.887887 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0923 10:23:32.914817 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0923 10:23:33.075675 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [7c0ce9c20225] <==
W0923 10:31:43.167394 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:31:43.167435 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:31:46.114036 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:31:46.114083 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:31:49.827875 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:31:49.827916 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:31:50.460903 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:31:50.460943 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:32:03.556165 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:32:03.556207 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:32:07.178480 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:32:07.178536 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:32:24.080503 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:32:24.080550 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:32:24.343264 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:32:24.343314 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:32:35.718308 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:32:35.718362 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:32:40.274545 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:32:40.274611 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:32:41.055093 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:32:41.055142 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 10:32:46.510382 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 10:32:46.510421 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 10:32:54.523327 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="14.962µs"
==> kube-proxy [d4b4134082f2] <==
I0923 10:21:39.802906 1 server_linux.go:66] "Using iptables proxy"
I0923 10:21:39.972051 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.150.0.16"]
E0923 10:21:39.972326 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0923 10:21:40.101058 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0923 10:21:40.101112 1 server_linux.go:169] "Using iptables Proxier"
I0923 10:21:40.114341 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0923 10:21:40.114712 1 server.go:483] "Version info" version="v1.31.1"
I0923 10:21:40.114740 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0923 10:21:40.116329 1 config.go:199] "Starting service config controller"
I0923 10:21:40.116359 1 shared_informer.go:313] Waiting for caches to sync for service config
I0923 10:21:40.116407 1 config.go:105] "Starting endpoint slice config controller"
I0923 10:21:40.116414 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0923 10:21:40.116407 1 config.go:328] "Starting node config controller"
I0923 10:21:40.118507 1 shared_informer.go:313] Waiting for caches to sync for node config
I0923 10:21:40.216805 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0923 10:21:40.216885 1 shared_informer.go:320] Caches are synced for service config
I0923 10:21:40.219023 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [d978992a060a] <==
W0923 10:21:30.961743 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0923 10:21:30.961782 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:30.961119 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0923 10:21:30.961819 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:30.961819 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0923 10:21:30.961856 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 10:21:31.800175 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0923 10:21:31.800218 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 10:21:31.877035 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0923 10:21:31.877077 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 10:21:31.891653 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0923 10:21:31.891696 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:31.951064 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0923 10:21:31.951104 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 10:21:31.968057 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0923 10:21:31.968098 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0923 10:21:31.998608 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0923 10:21:31.998650 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.085021 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0923 10:21:32.085071 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.116576 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0923 10:21:32.116623 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 10:21:32.138327 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0923 10:21:32.138371 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0923 10:21:34.657508 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Mon 2024-08-26 20:51:04 UTC, end at Mon 2024-09-23 10:32:55 UTC. --
Sep 23 10:32:42 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:42.761287 15946 scope.go:117] "RemoveContainer" containerID="19bc9bdaffa6ca1785506c4e9a9ebb2a8ba015cf365fda6d059b5d3a6aec0814"
Sep 23 10:32:42 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:42.761466 15946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-w2hzg_gadget(77a823d2-c0ec-4f9d-b418-c9bac6c68b52)\"" pod="gadget/gadget-w2hzg" podUID="77a823d2-c0ec-4f9d-b418-c9bac6c68b52"
Sep 23 10:32:48 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:48.763823 15946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="ef59238e-5f40-4a5e-aae7-54d679f8081f"
Sep 23 10:32:50 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:50.763330 15946 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="5921de05-3259-4ce4-9d6c-d4a86d7540fa"
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.385008 15946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27x4z\" (UniqueName: \"kubernetes.io/projected/5921de05-3259-4ce4-9d6c-d4a86d7540fa-kube-api-access-27x4z\") pod \"5921de05-3259-4ce4-9d6c-d4a86d7540fa\" (UID: \"5921de05-3259-4ce4-9d6c-d4a86d7540fa\") "
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.385071 15946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5921de05-3259-4ce4-9d6c-d4a86d7540fa-gcp-creds\") pod \"5921de05-3259-4ce4-9d6c-d4a86d7540fa\" (UID: \"5921de05-3259-4ce4-9d6c-d4a86d7540fa\") "
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.385183 15946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5921de05-3259-4ce4-9d6c-d4a86d7540fa-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5921de05-3259-4ce4-9d6c-d4a86d7540fa" (UID: "5921de05-3259-4ce4-9d6c-d4a86d7540fa"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.387044 15946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5921de05-3259-4ce4-9d6c-d4a86d7540fa-kube-api-access-27x4z" (OuterVolumeSpecName: "kube-api-access-27x4z") pod "5921de05-3259-4ce4-9d6c-d4a86d7540fa" (UID: "5921de05-3259-4ce4-9d6c-d4a86d7540fa"). InnerVolumeSpecName "kube-api-access-27x4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.485626 15946 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5921de05-3259-4ce4-9d6c-d4a86d7540fa-gcp-creds\") on node \"ubuntu-20-agent-14\" DevicePath \"\""
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.485670 15946 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-27x4z\" (UniqueName: \"kubernetes.io/projected/5921de05-3259-4ce4-9d6c-d4a86d7540fa-kube-api-access-27x4z\") on node \"ubuntu-20-agent-14\" DevicePath \"\""
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.761163 15946 scope.go:117] "RemoveContainer" containerID="19bc9bdaffa6ca1785506c4e9a9ebb2a8ba015cf365fda6d059b5d3a6aec0814"
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.888525 15946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj4jr\" (UniqueName: \"kubernetes.io/projected/678aa223-edb6-4a6c-b3e5-5d95e0ea40f6-kube-api-access-mj4jr\") pod \"678aa223-edb6-4a6c-b3e5-5d95e0ea40f6\" (UID: \"678aa223-edb6-4a6c-b3e5-5d95e0ea40f6\") "
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.894587 15946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678aa223-edb6-4a6c-b3e5-5d95e0ea40f6-kube-api-access-mj4jr" (OuterVolumeSpecName: "kube-api-access-mj4jr") pod "678aa223-edb6-4a6c-b3e5-5d95e0ea40f6" (UID: "678aa223-edb6-4a6c-b3e5-5d95e0ea40f6"). InnerVolumeSpecName "kube-api-access-mj4jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.989703 15946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtpmn\" (UniqueName: \"kubernetes.io/projected/35894a53-f7e8-4743-9eea-200f3986fcd6-kube-api-access-wtpmn\") pod \"35894a53-f7e8-4743-9eea-200f3986fcd6\" (UID: \"35894a53-f7e8-4743-9eea-200f3986fcd6\") "
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.989816 15946 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mj4jr\" (UniqueName: \"kubernetes.io/projected/678aa223-edb6-4a6c-b3e5-5d95e0ea40f6-kube-api-access-mj4jr\") on node \"ubuntu-20-agent-14\" DevicePath \"\""
Sep 23 10:32:54 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:54.992148 15946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35894a53-f7e8-4743-9eea-200f3986fcd6-kube-api-access-wtpmn" (OuterVolumeSpecName: "kube-api-access-wtpmn") pod "35894a53-f7e8-4743-9eea-200f3986fcd6" (UID: "35894a53-f7e8-4743-9eea-200f3986fcd6"). InnerVolumeSpecName "kube-api-access-wtpmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.090290 15946 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wtpmn\" (UniqueName: \"kubernetes.io/projected/35894a53-f7e8-4743-9eea-200f3986fcd6-kube-api-access-wtpmn\") on node \"ubuntu-20-agent-14\" DevicePath \"\""
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.359324 15946 scope.go:117] "RemoveContainer" containerID="c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.378503 15946 scope.go:117] "RemoveContainer" containerID="c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:55.379325 15946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef" containerID="c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.379365 15946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"} err="failed to get container status \"c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef\": rpc error: code = Unknown desc = Error response from daemon: No such container: c46422e23a32ee01a48dd8a40fec2f7ba74c60cf06f9e465fa6102a132dd18ef"
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.379391 15946 scope.go:117] "RemoveContainer" containerID="904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.399148 15946 scope.go:117] "RemoveContainer" containerID="904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: E0923 10:32:55.400117 15946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90" containerID="904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"
Sep 23 10:32:55 ubuntu-20-agent-14 kubelet[15946]: I0923 10:32:55.400188 15946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"} err="failed to get container status \"904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90\": rpc error: code = Unknown desc = Error response from daemon: No such container: 904225ddf913e30312b72371d03db663ac105837e128f92c30b8e687ecc0bc90"
==> storage-provisioner [29d253e8d623] <==
I0923 10:21:40.339278 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0923 10:21:40.349639 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0923 10:21:40.349676 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0923 10:21:40.360782 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0923 10:21:40.360971 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-14_a2863585-6319-4866-8b5f-dec1261c04ee!
I0923 10:21:40.362124 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b8e306d8-c1db-43d5-8589-ceef2d9d7cac", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-14_a2863585-6319-4866-8b5f-dec1261c04ee became leader
I0923 10:21:40.461157 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-14_a2863585-6319-4866-8b5f-dec1261c04ee!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent-14/10.150.0.16
Start Time:       Mon, 23 Sep 2024 10:23:41 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.24
IPs:
  IP:  10.244.0.24
Containers:
  busybox:
    Container ID:
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrdnh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-vrdnh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m15s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-14
  Normal   Pulling    7m49s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m49s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m49s (x4 over 9m14s)   kubelet            Error: ErrImagePull
  Warning  Failed     7m36s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m14s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.01s)