=== RUN TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.567618ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-rtp2b" [2c727f7b-0cf9-4843-a060-78e13883fe27] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002964795s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v7gq6" [4d2021b8-55cb-4260-88fc-edac5c2173d8] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00343927s
addons_test.go:342: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.076676838s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response to be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
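The failing step above launches a one-shot busybox pod and expects `wget --spider` to report `HTTP/1.1 200` from the in-cluster registry service, but the pod timed out after one minute. A hedged sketch for re-running the probe by hand — the pod name `registry-debug` and the extra `nslookup` step are illustrative additions, not part of the test:

```shell
# is_success: true when captured header output contains the line the test
# asserts on. Pure string check, so it can be exercised without a cluster.
is_success() {
  printf '%s\n' "$1" | grep -q 'HTTP/1.1 200'
}

# probe_registry: re-run the in-cluster check manually, adding an nslookup
# so a DNS failure can be told apart from a slow or unhealthy registry.
probe_registry() {
  kubectl --context minikube run registry-debug --rm -i --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- \
    sh -c 'nslookup registry.kube-system.svc.cluster.local;
           wget --spider -S http://registry.kube-system.svc.cluster.local 2>&1'
}
```

`is_success "$(probe_registry)"` then mirrors the assertion at addons_test.go:353.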
addons_test.go:361: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/08/28 17:03:23 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| start | --download-only -p | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:35429 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:53 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| | --addons=helm-tiller | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.33.1 | 28 Aug 24 16:54 UTC | 28 Aug 24 16:54 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/08/28 16:51:51
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0828 16:51:51.231414 20719 out.go:345] Setting OutFile to fd 1 ...
I0828 16:51:51.231860 20719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 16:51:51.231873 20719 out.go:358] Setting ErrFile to fd 2...
I0828 16:51:51.231880 20719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 16:51:51.232335 20719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10135/.minikube/bin
I0828 16:51:51.233317 20719 out.go:352] Setting JSON to false
I0828 16:51:51.234183 20719 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2058,"bootTime":1724861853,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0828 16:51:51.234244 20719 start.go:139] virtualization: kvm guest
I0828 16:51:51.236281 20719 out.go:177] * minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0828 16:51:51.237558 20719 out.go:177] - MINIKUBE_LOCATION=19529
W0828 16:51:51.237544 20719 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-10135/.minikube/cache/preloaded-tarball: no such file or directory
I0828 16:51:51.237629 20719 notify.go:220] Checking for updates...
I0828 16:51:51.239827 20719 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0828 16:51:51.241048 20719 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19529-10135/kubeconfig
I0828 16:51:51.242264 20719 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10135/.minikube
I0828 16:51:51.243599 20719 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0828 16:51:51.244831 20719 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0828 16:51:51.246024 20719 driver.go:392] Setting default libvirt URI to qemu:///system
I0828 16:51:51.256430 20719 out.go:177] * Using the none driver based on user configuration
I0828 16:51:51.257598 20719 start.go:297] selected driver: none
I0828 16:51:51.257615 20719 start.go:901] validating driver "none" against <nil>
I0828 16:51:51.257625 20719 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0828 16:51:51.257650 20719 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0828 16:51:51.257919 20719 out.go:270] ! The 'none' driver does not respect the --memory flag
I0828 16:51:51.258407 20719 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0828 16:51:51.258608 20719 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0828 16:51:51.258669 20719 cni.go:84] Creating CNI manager for ""
I0828 16:51:51.258684 20719 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0828 16:51:51.258693 20719 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0828 16:51:51.258733 20719 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0828 16:51:51.260156 20719 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0828 16:51:51.261613 20719 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/config.json ...
I0828 16:51:51.261643 20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/config.json: {Name:mk6eedcd645d9a68ea0fc579a6e53955b25745c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:51.261767 20719 start.go:360] acquireMachinesLock for minikube: {Name:mka1638c483578dda3e9e32334e7fbf26da86364 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0828 16:51:51.261795 20719 start.go:364] duration metric: took 16.183µs to acquireMachinesLock for "minikube"
I0828 16:51:51.261807 20719 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0828 16:51:51.261861 20719 start.go:125] createHost starting for "" (driver="none")
I0828 16:51:51.263199 20719 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0828 16:51:51.264236 20719 exec_runner.go:51] Run: systemctl --version
I0828 16:51:51.266620 20719 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0828 16:51:51.266649 20719 client.go:168] LocalClient.Create starting
I0828 16:51:51.266694 20719 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10135/.minikube/certs/ca.pem
I0828 16:51:51.266721 20719 main.go:141] libmachine: Decoding PEM data...
I0828 16:51:51.266734 20719 main.go:141] libmachine: Parsing certificate...
I0828 16:51:51.266782 20719 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10135/.minikube/certs/cert.pem
I0828 16:51:51.266804 20719 main.go:141] libmachine: Decoding PEM data...
I0828 16:51:51.266817 20719 main.go:141] libmachine: Parsing certificate...
I0828 16:51:51.267098 20719 client.go:171] duration metric: took 443.007µs to LocalClient.Create
I0828 16:51:51.267121 20719 start.go:167] duration metric: took 502.007µs to libmachine.API.Create "minikube"
I0828 16:51:51.267129 20719 start.go:293] postStartSetup for "minikube" (driver="none")
I0828 16:51:51.267166 20719 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0828 16:51:51.267194 20719 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0828 16:51:51.275957 20719 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0828 16:51:51.275975 20719 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0828 16:51:51.275983 20719 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0828 16:51:51.277663 20719 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0828 16:51:51.278861 20719 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10135/.minikube/addons for local assets ...
I0828 16:51:51.278908 20719 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10135/.minikube/files for local assets ...
I0828 16:51:51.278944 20719 start.go:296] duration metric: took 11.810407ms for postStartSetup
I0828 16:51:51.279543 20719 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/config.json ...
I0828 16:51:51.279669 20719 start.go:128] duration metric: took 17.800033ms to createHost
I0828 16:51:51.279677 20719 start.go:83] releasing machines lock for "minikube", held for 17.874919ms
I0828 16:51:51.280071 20719 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0828 16:51:51.280167 20719 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0828 16:51:51.281936 20719 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0828 16:51:51.281998 20719 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0828 16:51:51.292092 20719 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0828 16:51:51.292120 20719 start.go:495] detecting cgroup driver to use...
I0828 16:51:51.292147 20719 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0828 16:51:51.292247 20719 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0828 16:51:51.310228 20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0828 16:51:51.318322 20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0828 16:51:51.326110 20719 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0828 16:51:51.326159 20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0828 16:51:51.334638 20719 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0828 16:51:51.342598 20719 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0828 16:51:51.351019 20719 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0828 16:51:51.359543 20719 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0828 16:51:51.367618 20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0828 16:51:51.375653 20719 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0828 16:51:51.383473 20719 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
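The sequence above rewrites `/etc/containerd/config.toml` in place through a chain of `sudo sed` commands. A minimal sketch replaying three of those substitutions against a throwaway copy (the sample TOML content below is invented for illustration), so the effect can be inspected without root:

```shell
# Throwaway stand-in for /etc/containerd/config.toml.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF

# Same expressions as the log, minus sudo and pointed at the copy:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$tmp"
sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' "$tmp"

# Show the rewritten keys.
grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' "$tmp"
```

Note the `( *)` capture preserves the original indentation, which is why the same expressions work regardless of how the distribution formats its default config.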
I0828 16:51:51.391495 20719 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0828 16:51:51.399194 20719 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0828 16:51:51.407182 20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0828 16:51:51.618874 20719 exec_runner.go:51] Run: sudo systemctl restart containerd
I0828 16:51:51.684008 20719 start.go:495] detecting cgroup driver to use...
I0828 16:51:51.684061 20719 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0828 16:51:51.684179 20719 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0828 16:51:51.702599 20719 exec_runner.go:51] Run: which cri-dockerd
I0828 16:51:51.703469 20719 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0828 16:51:51.710580 20719 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0828 16:51:51.710602 20719 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0828 16:51:51.710637 20719 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0828 16:51:51.717255 20719 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0828 16:51:51.717386 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2133849740 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0828 16:51:51.724625 20719 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0828 16:51:51.942585 20719 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0828 16:51:52.163123 20719 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0828 16:51:52.163277 20719 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0828 16:51:52.163295 20719 exec_runner.go:203] rm: /etc/docker/daemon.json
I0828 16:51:52.163337 20719 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0828 16:51:52.171067 20719 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0828 16:51:52.171189 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1923619406 /etc/docker/daemon.json
I0828 16:51:52.179377 20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0828 16:51:52.386501 20719 exec_runner.go:51] Run: sudo systemctl restart docker
I0828 16:51:52.686175 20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0828 16:51:52.696560 20719 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0828 16:51:52.710614 20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0828 16:51:52.720447 20719 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0828 16:51:52.942591 20719 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0828 16:51:53.161219 20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0828 16:51:53.372571 20719 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0828 16:51:53.385665 20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0828 16:51:53.395311 20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0828 16:51:53.610955 20719 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0828 16:51:53.675414 20719 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0828 16:51:53.675471 20719 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0828 16:51:53.677136 20719 start.go:563] Will wait 60s for crictl version
I0828 16:51:53.677179 20719 exec_runner.go:51] Run: which crictl
I0828 16:51:53.678007 20719 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0828 16:51:53.708425 20719 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.0
RuntimeApiVersion: v1
I0828 16:51:53.708494 20719 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0828 16:51:53.729056 20719 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0828 16:51:53.751553 20719 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
I0828 16:51:53.751635 20719 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0828 16:51:53.754356 20719 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0828 16:51:53.755538 20719 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0828 16:51:53.755643 20719 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0828 16:51:53.755659 20719 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.0 docker true true} ...
I0828 16:51:53.755771 20719 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0828 16:51:53.755814 20719 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0828 16:51:53.799394 20719 cni.go:84] Creating CNI manager for ""
I0828 16:51:53.799422 20719 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0828 16:51:53.799432 20719 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0828 16:51:53.799453 20719 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0828 16:51:53.799599 20719 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.138.0.48
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ubuntu-20-agent-2"
  kubeletExtraArgs:
    node-ip: 10.138.0.48
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0828 16:51:53.799650 20719 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
I0828 16:51:53.807722 20719 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
Initiating transfer...
I0828 16:51:53.807766 20719 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
I0828 16:51:53.815038 20719 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
I0828 16:51:53.815079 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
I0828 16:51:53.815042 20719 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
I0828 16:51:53.815188 20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0828 16:51:53.815041 20719 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
I0828 16:51:53.815334 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
I0828 16:51:53.826025 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
I0828 16:51:53.864328 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4187861228 /var/lib/minikube/binaries/v1.31.0/kubeadm
I0828 16:51:53.864682 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2546517855 /var/lib/minikube/binaries/v1.31.0/kubectl
I0828 16:51:53.890782 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1422019770 /var/lib/minikube/binaries/v1.31.0/kubelet
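The three binaries above are fetched from dl.k8s.io and verified against the published `.sha256` files, which is what the `?checksum=file:...` suffix on each URL requests. A minimal sketch of that verification step, run here on a stand-in file rather than a real download (for the real thing the two inputs would come from `https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl` and `kubectl.sha256`):

```shell
#!/bin/sh
# Sketch of the checksum verification minikube performs on each binary.
set -e
cd "$(mktemp -d)"
printf 'fake kubectl binary\n' > kubectl               # stand-in for the download
sha256sum kubectl | awk '{print $1}' > kubectl.sha256  # stand-in for the published digest
# dl.k8s.io .sha256 files contain only the hex digest, so rebuild the
# "<digest>  <file>" line that sha256sum --check expects:
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check --quiet
echo verified
```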
I0828 16:51:53.955315 20719 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0828 16:51:53.963124 20719 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0828 16:51:53.963142 20719 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0828 16:51:53.963176 20719 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0828 16:51:53.970109 20719 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
I0828 16:51:53.970227 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3284635942 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0828 16:51:53.978324 20719 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0828 16:51:53.978340 20719 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0828 16:51:53.978368 20719 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0828 16:51:53.984970 20719 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0828 16:51:53.985080 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3543767253 /lib/systemd/system/kubelet.service
I0828 16:51:53.992267 20719 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0828 16:51:53.992383 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3937683150 /var/tmp/minikube/kubeadm.yaml.new
I0828 16:51:53.999883 20719 exec_runner.go:51] Run: grep 10.138.0.48 control-plane.minikube.internal$ /etc/hosts
I0828 16:51:54.001103 20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0828 16:51:54.226551 20719 exec_runner.go:51] Run: sudo systemctl start kubelet
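The sequence from 16:51:53.963 to 16:51:54.226 installs the kubelet systemd unit plus its kubeadm drop-in and reloads systemd. A sketch with the destination root parameterized so it can be exercised without root (minikube writes directly under `/` with sudo; the unit contents below are placeholders, not minikube's real templates):

```shell
#!/bin/sh
# Sketch: install the kubelet unit and drop-in, then reload systemd.
set -e
DESTDIR="${DESTDIR:-$(mktemp -d)}"
mkdir -p "$DESTDIR/etc/systemd/system/kubelet.service.d" \
         "$DESTDIR/lib/systemd/system"
# Placeholder unit contents; minikube renders the real files from memory.
printf '[Service]\nEnvironment="KUBELET_EXTRA_ARGS="\n' \
  > "$DESTDIR/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
printf '[Unit]\nDescription=kubelet: The Kubernetes Node Agent\n' \
  > "$DESTDIR/lib/systemd/system/kubelet.service"
# On the real host, with sudo:
#   systemctl daemon-reload && systemctl start kubelet
echo "installed under $DESTDIR"
```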
I0828 16:51:54.240413 20719 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube for IP: 10.138.0.48
I0828 16:51:54.240434 20719 certs.go:194] generating shared ca certs ...
I0828 16:51:54.240451 20719 certs.go:226] acquiring lock for ca certs: {Name:mk22a8c2144a7d6964e6b37d9c80e4caab8a8dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:54.240562 20719 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10135/.minikube/ca.key
I0828 16:51:54.240599 20719 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10135/.minikube/proxy-client-ca.key
I0828 16:51:54.240608 20719 certs.go:256] generating profile certs ...
I0828 16:51:54.240652 20719 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.key
I0828 16:51:54.240671 20719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.crt with IP's: []
I0828 16:51:54.547376 20719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.crt ...
I0828 16:51:54.547404 20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.crt: {Name:mk8a49ed35fd7d9762460679b6e6a57ffafbaf7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:54.547532 20719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.key ...
I0828 16:51:54.547542 20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/client.key: {Name:mkd9775e3f45fbfe9210db2a820d59a821d9e905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:54.547599 20719 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key.35c0634a
I0828 16:51:54.547612 20719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
I0828 16:51:55.027398 20719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
I0828 16:51:55.027428 20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk15fd5643820644a01d331bdfed69765092ed16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:55.027542 20719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key.35c0634a ...
I0828 16:51:55.027555 20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk222daf87a02fdad80d6658a8192118ed1581cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:55.027604 20719 certs.go:381] copying /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt
I0828 16:51:55.027687 20719 certs.go:385] copying /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key
I0828 16:51:55.027771 20719 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.key
I0828 16:51:55.027788 20719 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0828 16:51:55.169228 20719 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.crt ...
I0828 16:51:55.169261 20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.crt: {Name:mk348a585877396cb99763220bc3a49ab69d3498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:55.169397 20719 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.key ...
I0828 16:51:55.169408 20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.key: {Name:mk7fd98c26b17b3d7b3c49fdcd98d024fde067c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:51:55.169555 20719 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10135/.minikube/certs/ca-key.pem (1679 bytes)
I0828 16:51:55.169587 20719 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10135/.minikube/certs/ca.pem (1078 bytes)
I0828 16:51:55.169610 20719 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10135/.minikube/certs/cert.pem (1123 bytes)
I0828 16:51:55.169639 20719 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10135/.minikube/certs/key.pem (1675 bytes)
I0828 16:51:55.170156 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0828 16:51:55.170277 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1568032011 /var/lib/minikube/certs/ca.crt
I0828 16:51:55.178503 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0828 16:51:55.178626 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3742733851 /var/lib/minikube/certs/ca.key
I0828 16:51:55.185909 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0828 16:51:55.186022 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1769574544 /var/lib/minikube/certs/proxy-client-ca.crt
I0828 16:51:55.194410 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0828 16:51:55.194516 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2769527872 /var/lib/minikube/certs/proxy-client-ca.key
I0828 16:51:55.201948 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0828 16:51:55.202039 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3109992943 /var/lib/minikube/certs/apiserver.crt
I0828 16:51:55.210079 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0828 16:51:55.210176 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2277576273 /var/lib/minikube/certs/apiserver.key
I0828 16:51:55.217395 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0828 16:51:55.217502 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1509678444 /var/lib/minikube/certs/proxy-client.crt
I0828 16:51:55.225851 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0828 16:51:55.225945 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1059749772 /var/lib/minikube/certs/proxy-client.key
I0828 16:51:55.233251 20719 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0828 16:51:55.233265 20719 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0828 16:51:55.233290 20719 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0828 16:51:55.240690 20719 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0828 16:51:55.240801 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3299571709 /usr/share/ca-certificates/minikubeCA.pem
I0828 16:51:55.249775 20719 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0828 16:51:55.249882 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2311913383 /var/lib/minikube/kubeconfig
I0828 16:51:55.257237 20719 exec_runner.go:51] Run: openssl version
I0828 16:51:55.260053 20719 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0828 16:51:55.268072 20719 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0828 16:51:55.269294 20719 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Aug 28 16:51 /usr/share/ca-certificates/minikubeCA.pem
I0828 16:51:55.269333 20719 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0828 16:51:55.272136 20719 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
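The symlink name `b5213941.0` above is not arbitrary: OpenSSL locates trusted CAs in `/etc/ssl/certs` by subject-name hash, so the link must be called `<hash>.0` where `<hash>` is the output of `openssl x509 -hash` for that certificate (`b5213941` is that value for minikube's CA). A sketch using a throwaway self-signed cert and a temp dir standing in for `/etc/ssl/certs`:

```shell
#!/bin/sh
# Sketch of the hash-named symlink install done by the two commands above.
set -e
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout demoCA.key -out demoCA.pem 2>/dev/null
HASH=$(openssl x509 -hash -noout -in demoCA.pem)  # 8 hex chars
mkdir certs
ln -fs "$PWD/demoCA.pem" "certs/${HASH}.0"
echo "linked as certs/${HASH}.0"
```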
I0828 16:51:55.279188 20719 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0828 16:51:55.280278 20719 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0828 16:51:55.280316 20719 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0828 16:51:55.280410 20719 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0828 16:51:55.295810 20719 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0828 16:51:55.304379 20719 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0828 16:51:55.312252 20719 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0828 16:51:55.331216 20719 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0828 16:51:55.339132 20719 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0828 16:51:55.339150 20719 kubeadm.go:157] found existing configuration files:
I0828 16:51:55.339183 20719 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0828 16:51:55.346872 20719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0828 16:51:55.346906 20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0828 16:51:55.353498 20719 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0828 16:51:55.360589 20719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0828 16:51:55.360622 20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0828 16:51:55.367271 20719 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0828 16:51:55.374395 20719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0828 16:51:55.374427 20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0828 16:51:55.381410 20719 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0828 16:51:55.388202 20719 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0828 16:51:55.388235 20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
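The four grep/rm pairs above are one stale-config cleanup pass: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so `kubeadm init` can regenerate it. A sketch of that logic with the directory parameterized (`KUBE_DIR` stands in for `/etc/kubernetes`, where minikube needs sudo); the two seeded files are demo inputs, not from this run:

```shell
#!/bin/sh
# Sketch: drop kubeconfigs that do not reference the expected endpoint.
set -e
KUBE_DIR="${KUBE_DIR:-$(mktemp -d)}"
ENDPOINT="https://control-plane.minikube.internal:8443"
# Demo inputs: one fresh file, one stale file.
echo "server: $ENDPOINT"             > "$KUBE_DIR/admin.conf"
echo "server: https://old-host:6443" > "$KUBE_DIR/kubelet.conf"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  grep -q "$ENDPOINT" "$KUBE_DIR/$f" 2>/dev/null || rm -f "$KUBE_DIR/$f"
done
```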
I0828 16:51:55.394891 20719 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0828 16:51:55.426450 20719 kubeadm.go:310] W0828 16:51:55.426343 21608 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0828 16:51:55.426915 20719 kubeadm.go:310] W0828 16:51:55.426864 21608 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0828 16:51:55.428447 20719 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
I0828 16:51:55.428501 20719 kubeadm.go:310] [preflight] Running pre-flight checks
I0828 16:51:55.518580 20719 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0828 16:51:55.518671 20719 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0828 16:51:55.518683 20719 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0828 16:51:55.518688 20719 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0828 16:51:55.528437 20719 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0828 16:51:55.532115 20719 out.go:235] - Generating certificates and keys ...
I0828 16:51:55.532158 20719 kubeadm.go:310] [certs] Using existing ca certificate authority
I0828 16:51:55.532169 20719 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0828 16:51:55.679237 20719 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0828 16:51:55.913438 20719 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0828 16:51:56.044209 20719 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0828 16:51:56.116265 20719 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0828 16:51:56.177109 20719 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0828 16:51:56.177218 20719 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0828 16:51:56.292701 20719 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0828 16:51:56.292786 20719 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0828 16:51:56.414644 20719 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0828 16:51:56.560775 20719 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0828 16:51:56.798743 20719 kubeadm.go:310] [certs] Generating "sa" key and public key
I0828 16:51:56.798857 20719 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0828 16:51:56.861023 20719 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0828 16:51:56.988243 20719 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0828 16:51:57.079274 20719 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0828 16:51:57.772259 20719 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0828 16:51:57.843373 20719 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0828 16:51:57.843938 20719 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0828 16:51:57.846128 20719 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0828 16:51:57.848145 20719 out.go:235] - Booting up control plane ...
I0828 16:51:57.848165 20719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0828 16:51:57.848178 20719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0828 16:51:57.848655 20719 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0828 16:51:57.875912 20719 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0828 16:51:57.880006 20719 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0828 16:51:57.880037 20719 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0828 16:51:58.111341 20719 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0828 16:51:58.111364 20719 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0828 16:51:59.113061 20719 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001687898s
I0828 16:51:59.113802 20719 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0828 16:52:03.114469 20719 kubeadm.go:310] [api-check] The API server is healthy after 4.00126966s
I0828 16:52:03.125573 20719 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0828 16:52:03.135324 20719 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0828 16:52:03.150393 20719 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0828 16:52:03.150415 20719 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0828 16:52:03.157152 20719 kubeadm.go:310] [bootstrap-token] Using token: xipvp4.cbrr22unyblglu5o
I0828 16:52:03.158513 20719 out.go:235] - Configuring RBAC rules ...
I0828 16:52:03.158544 20719 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0828 16:52:03.161349 20719 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0828 16:52:03.166082 20719 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0828 16:52:03.168253 20719 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0828 16:52:03.170472 20719 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0828 16:52:03.173409 20719 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0828 16:52:03.520867 20719 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0828 16:52:03.940900 20719 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0828 16:52:04.519929 20719 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0828 16:52:04.520734 20719 kubeadm.go:310]
I0828 16:52:04.520743 20719 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0828 16:52:04.520747 20719 kubeadm.go:310]
I0828 16:52:04.520752 20719 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0828 16:52:04.520756 20719 kubeadm.go:310]
I0828 16:52:04.520760 20719 kubeadm.go:310] mkdir -p $HOME/.kube
I0828 16:52:04.520765 20719 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0828 16:52:04.520768 20719 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0828 16:52:04.520779 20719 kubeadm.go:310]
I0828 16:52:04.520783 20719 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0828 16:52:04.520787 20719 kubeadm.go:310]
I0828 16:52:04.520790 20719 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0828 16:52:04.520793 20719 kubeadm.go:310]
I0828 16:52:04.520795 20719 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0828 16:52:04.520798 20719 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0828 16:52:04.520800 20719 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0828 16:52:04.520802 20719 kubeadm.go:310]
I0828 16:52:04.520808 20719 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0828 16:52:04.520811 20719 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0828 16:52:04.520814 20719 kubeadm.go:310]
I0828 16:52:04.520826 20719 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xipvp4.cbrr22unyblglu5o \
I0828 16:52:04.520831 20719 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:eb28cbd695d699ba2c523a84ccf8745e98e5bae9700dcd7a322a5ef1a0c7d4c2 \
I0828 16:52:04.520846 20719 kubeadm.go:310] --control-plane
I0828 16:52:04.520850 20719 kubeadm.go:310]
I0828 16:52:04.520854 20719 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0828 16:52:04.520858 20719 kubeadm.go:310]
I0828 16:52:04.520862 20719 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xipvp4.cbrr22unyblglu5o \
I0828 16:52:04.520866 20719 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:eb28cbd695d699ba2c523a84ccf8745e98e5bae9700dcd7a322a5ef1a0c7d4c2
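The `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 of the CA certificate's public key (SPKI), per the standard kubeadm recipe. A sketch computing it on a throwaway CA; on this host the real input would be `/var/lib/minikube/certs/ca.crt` because of the `certificatesDir` setting, and the `openssl rsa` step assumes an RSA CA key (kubeadm's default):

```shell
#!/bin/sh
# Sketch: recompute a kubeadm discovery-token-ca-cert-hash from a CA cert.
set -e
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubeCA" \
  -keyout ca.key -out ca.crt 2>/dev/null
HASH=$(openssl x509 -pubkey -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:$HASH"
```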
I0828 16:52:04.523903 20719 cni.go:84] Creating CNI manager for ""
I0828 16:52:04.523934 20719 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0828 16:52:04.525781 20719 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0828 16:52:04.526837 20719 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0828 16:52:04.537152 20719 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0828 16:52:04.537284 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2018640280 /etc/cni/net.d/1-k8s.conflist
I0828 16:52:04.547379 20719 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0828 16:52:04.547423 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:04.547469 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_08_28T16_52_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0828 16:52:04.555796 20719 ops.go:34] apiserver oom_adj: -16
I0828 16:52:04.610461 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:05.110749 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:05.610574 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:06.110768 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:06.611271 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:07.111504 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:07.611028 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:08.110773 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:08.611015 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:09.110604 20719 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0828 16:52:09.185268 20719 kubeadm.go:1113] duration metric: took 4.637890861s to wait for elevateKubeSystemPrivileges
I0828 16:52:09.185308 20719 kubeadm.go:394] duration metric: took 13.904987542s to StartCluster
I0828 16:52:09.185334 20719 settings.go:142] acquiring lock: {Name:mk01918bb9c900e7329bdae41560e39d087de7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:09.185397 20719 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19529-10135/kubeconfig
I0828 16:52:09.185972 20719 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10135/kubeconfig: {Name:mk291fc5a30453909f461e99fae52dabc9fc4c55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0828 16:52:09.186146 20719 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0828 16:52:09.186238 20719 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0828 16:52:09.186417 20719 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 16:52:09.186432 20719 addons.go:69] Setting metrics-server=true in profile "minikube"
I0828 16:52:09.186423 20719 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0828 16:52:09.186439 20719 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0828 16:52:09.186457 20719 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0828 16:52:09.186468 20719 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0828 16:52:09.186470 20719 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0828 16:52:09.186478 20719 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0828 16:52:09.186487 20719 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0828 16:52:09.186491 20719 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0828 16:52:09.186490 20719 addons.go:69] Setting registry=true in profile "minikube"
I0828 16:52:09.186505 20719 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0828 16:52:09.186515 20719 addons.go:234] Setting addon registry=true in "minikube"
I0828 16:52:09.186517 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.186523 20719 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0828 16:52:09.186544 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.186548 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.187044 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.187061 20719 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0828 16:52:09.187073 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.187102 20719 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0828 16:52:09.187119 20719 addons.go:69] Setting volcano=true in profile "minikube"
I0828 16:52:09.187119 20719 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0828 16:52:09.187132 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.187142 20719 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0828 16:52:09.187144 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.187160 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.186472 20719 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0828 16:52:09.187198 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.186459 20719 addons.go:234] Setting addon metrics-server=true in "minikube"
I0828 16:52:09.186482 20719 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0828 16:52:09.187328 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.187339 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.187165 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.187142 20719 addons.go:234] Setting addon volcano=true in "minikube"
I0828 16:52:09.187549 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.187746 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.187104 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.187759 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.187769 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.187790 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.187804 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.187922 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.187929 20719 out.go:177] * Configuring local host environment ...
I0828 16:52:09.187934 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.187046 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.187934 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.188034 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.188065 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.188068 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.187182 20719 mustload.go:65] Loading cluster: minikube
I0828 16:52:09.188161 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.188172 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.188198 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.188266 20719 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 16:52:09.187944 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.188323 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.186421 20719 addons.go:69] Setting yakd=true in profile "minikube"
I0828 16:52:09.188523 20719 addons.go:234] Setting addon yakd=true in "minikube"
I0828 16:52:09.188552 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.188729 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.188751 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.188780 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.189129 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.189152 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.189181 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.186481 20719 addons.go:69] Setting helm-tiller=true in profile "minikube"
I0828 16:52:09.189738 20719 addons.go:234] Setting addon helm-tiller=true in "minikube"
I0828 16:52:09.189778 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.190407 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.190433 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.190461 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0828 16:52:09.190731 20719 out.go:270] *
I0828 16:52:09.187109 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.187144 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.187940 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.186464 20719 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
W0828 16:52:09.192011 20719 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
I0828 16:52:09.192019 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.192029 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.192057 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.192061 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0828 16:52:09.192020 20719 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0828 16:52:09.192324 20719 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0828 16:52:09.192335 20719 out.go:270] *
I0828 16:52:09.192020 20719 host.go:66] Checking if "minikube" exists ...
W0828 16:52:09.192419 20719 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0828 16:52:09.192433 20719 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0828 16:52:09.192440 20719 out.go:270] *
W0828 16:52:09.192467 20719 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0828 16:52:09.192480 20719 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0828 16:52:09.192493 20719 out.go:270] *
W0828 16:52:09.192503 20719 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0828 16:52:09.192532 20719 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0828 16:52:09.192994 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.193011 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.193034 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.194174 20719 out.go:177] * Verifying Kubernetes components...
I0828 16:52:09.198297 20719 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0828 16:52:09.210135 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.213218 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.214472 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.215938 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.228945 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.229182 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.229647 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.230037 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.230527 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.236667 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.236781 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.242923 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.244332 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.244737 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.246966 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.247019 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.247893 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.247943 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.248967 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.252024 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.252081 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.258798 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.258890 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.263203 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.263257 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.267641 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.267711 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.267960 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.268600 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.268623 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.271791 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.271887 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.272959 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.273008 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.274117 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.274168 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.275334 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.276567 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.276625 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.278332 20719 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0828 16:52:09.279618 20719 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0828 16:52:09.279648 20719 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0828 16:52:09.279982 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1216619049 /etc/kubernetes/addons/yakd-ns.yaml
I0828 16:52:09.280194 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.280216 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.287889 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.287911 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.287892 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.288296 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.289198 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.290657 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.290678 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.291831 20719 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0828 16:52:09.292170 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.292188 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.292412 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.292429 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.292889 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.292933 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.294316 20719 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0828 16:52:09.295105 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.295121 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.296930 20719 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0828 16:52:09.297044 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.298449 20719 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0828 16:52:09.298465 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.298474 20719 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0828 16:52:09.298590 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1323458966 /etc/kubernetes/addons/yakd-sa.yaml
I0828 16:52:09.299384 20719 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0828 16:52:09.299417 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0828 16:52:09.299941 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3148500674 /etc/kubernetes/addons/volcano-deployment.yaml
I0828 16:52:09.300111 20719 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0828 16:52:09.300938 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.300959 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.301179 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.301190 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.302779 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.303920 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.303943 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.304806 20719 out.go:177] - Using image docker.io/registry:2.8.3
I0828 16:52:09.304935 20719 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0828 16:52:09.304956 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0828 16:52:09.307197 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube577144511 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0828 16:52:09.307395 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.307452 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.308108 20719 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0828 16:52:09.308209 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.309346 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.309972 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.309916 20719 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0828 16:52:09.310357 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0828 16:52:09.310832 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.311468 20719 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0828 16:52:09.317201 20719 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0828 16:52:09.317436 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0828 16:52:09.317798 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube241128223 /etc/kubernetes/addons/registry-rc.yaml
I0828 16:52:09.320137 20719 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0828 16:52:09.320615 20719 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0828 16:52:09.321157 20719 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0828 16:52:09.321492 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1528453054 /etc/kubernetes/addons/helm-tiller-dp.yaml
I0828 16:52:09.321697 20719 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0828 16:52:09.321720 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0828 16:52:09.322353 20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0828 16:52:09.322386 20719 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0828 16:52:09.322506 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2230331267 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0828 16:52:09.324305 20719 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0828 16:52:09.325623 20719 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0828 16:52:09.325656 20719 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0828 16:52:09.325661 20719 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0828 16:52:09.326945 20719 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0828 16:52:09.327070 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube384429 /etc/kubernetes/addons/yakd-crb.yaml
I0828 16:52:09.327370 20719 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0828 16:52:09.327401 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.326964 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3346000782 /etc/kubernetes/addons/deployment.yaml
I0828 16:52:09.328807 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.328827 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.328859 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.330111 20719 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0828 16:52:09.331402 20719 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0828 16:52:09.332574 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.332593 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.334100 20719 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0828 16:52:09.336265 20719 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0828 16:52:09.337872 20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0828 16:52:09.337912 20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0828 16:52:09.338832 20719 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0828 16:52:09.338855 20719 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0828 16:52:09.344564 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.344598 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.349463 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0828 16:52:09.349624 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.349637 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.349958 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.350283 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.350339 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.351078 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3454000902 /etc/kubernetes/addons/registry-svc.yaml
I0828 16:52:09.351527 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube816012186 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0828 16:52:09.351596 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0828 16:52:09.351961 20719 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
I0828 16:52:09.353262 20719 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0828 16:52:09.353294 20719 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0828 16:52:09.355415 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1153670770 /etc/kubernetes/addons/ig-namespace.yaml
I0828 16:52:09.357159 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.358230 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.359763 20719 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0828 16:52:09.360920 20719 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0828 16:52:09.360938 20719 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0828 16:52:09.360945 20719 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0828 16:52:09.360982 20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0828 16:52:09.367824 20719 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0828 16:52:09.367840 20719 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0828 16:52:09.367847 20719 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0828 16:52:09.367862 20719 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0828 16:52:09.367917 20719 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0828 16:52:09.367933 20719 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0828 16:52:09.367979 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3348239087 /etc/kubernetes/addons/yakd-svc.yaml
I0828 16:52:09.368047 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1789194110 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0828 16:52:09.369972 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.371120 20719 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0828 16:52:09.371158 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:09.371872 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:09.371891 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:09.371924 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:09.372182 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3211966672 /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0828 16:52:09.374379 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0828 16:52:09.376872 20719 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0828 16:52:09.376903 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0828 16:52:09.377024 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3449337573 /etc/kubernetes/addons/registry-proxy.yaml
I0828 16:52:09.384168 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.384308 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.384657 20719 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0828 16:52:09.384716 20719 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0828 16:52:09.384733 20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0828 16:52:09.384804 20719 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0828 16:52:09.385347 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3756036716 /etc/kubernetes/addons/rbac-hostpath.yaml
I0828 16:52:09.385973 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2512930758 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0828 16:52:09.386523 20719 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
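(Annotation: the sed pipeline above injects a `hosts` block and a `log` directive into the coredns ConfigMap so that `host.minikube.internal` resolves inside the cluster. Assuming the edit applies cleanly, the relevant Corefile section ends up roughly like the following sketch — the surrounding plugins are elided:)

```
.:53 {
    log
    errors
    ...
    hosts {
       127.0.0.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}
```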
I0828 16:52:09.390166 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0828 16:52:09.390403 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3066196383 /etc/kubernetes/addons/storage-provisioner.yaml
I0828 16:52:09.391884 20719 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0828 16:52:09.391914 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0828 16:52:09.392017 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3448776941 /etc/kubernetes/addons/yakd-dp.yaml
I0828 16:52:09.393082 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.396035 20719 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0828 16:52:09.396060 20719 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0828 16:52:09.396167 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube445499146 /etc/kubernetes/addons/helm-tiller-svc.yaml
I0828 16:52:09.399052 20719 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0828 16:52:09.399081 20719 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0828 16:52:09.399191 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2707607647 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0828 16:52:09.400736 20719 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0828 16:52:09.403433 20719 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0828 16:52:09.403462 20719 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0828 16:52:09.403599 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube495669776 /etc/kubernetes/addons/metrics-apiservice.yaml
I0828 16:52:09.417129 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.417199 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.420638 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0828 16:52:09.421373 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0828 16:52:09.424394 20719 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0828 16:52:09.424433 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0828 16:52:09.424565 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1570379995 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0828 16:52:09.425548 20719 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0828 16:52:09.425583 20719 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0828 16:52:09.425621 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0828 16:52:09.425708 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube533332411 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0828 16:52:09.439110 20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0828 16:52:09.439141 20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0828 16:52:09.439262 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3612433538 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0828 16:52:09.452424 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:09.452833 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.452877 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.459542 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.459586 20719 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0828 16:52:09.459602 20719 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0828 16:52:09.459609 20719 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0828 16:52:09.459649 20719 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0828 16:52:09.463986 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:09.464047 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:09.475775 20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0828 16:52:09.475814 20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0828 16:52:09.475948 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3213713111 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0828 16:52:09.483530 20719 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0828 16:52:09.483562 20719 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0828 16:52:09.483691 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4257002272 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0828 16:52:09.492568 20719 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0828 16:52:09.492600 20719 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0828 16:52:09.492724 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3073608350 /etc/kubernetes/addons/ig-role.yaml
I0828 16:52:09.507004 20719 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0828 16:52:09.507071 20719 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0828 16:52:09.507210 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2465777229 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0828 16:52:09.513210 20719 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0828 16:52:09.513364 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3501012253 /etc/kubernetes/addons/storageclass.yaml
I0828 16:52:09.516146 20719 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0828 16:52:09.516177 20719 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0828 16:52:09.516301 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3461712430 /etc/kubernetes/addons/ig-rolebinding.yaml
I0828 16:52:09.517690 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0828 16:52:09.520352 20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0828 16:52:09.520381 20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0828 16:52:09.520490 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube217431899 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0828 16:52:09.533071 20719 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0828 16:52:09.533099 20719 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0828 16:52:09.533212 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1874314693 /etc/kubernetes/addons/metrics-server-service.yaml
I0828 16:52:09.537660 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:09.537690 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:09.543714 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:09.552281 20719 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0828 16:52:09.552320 20719 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0828 16:52:09.552447 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube704300080 /etc/kubernetes/addons/ig-clusterrole.yaml
I0828 16:52:09.552584 20719 out.go:177] - Using image docker.io/busybox:stable
I0828 16:52:09.554395 20719 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0828 16:52:09.555581 20719 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0828 16:52:09.555606 20719 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0828 16:52:09.555738 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1381229224 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0828 16:52:09.555783 20719 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0828 16:52:09.555805 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0828 16:52:09.555918 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4280683204 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0828 16:52:09.575978 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0828 16:52:09.606465 20719 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0828 16:52:09.606499 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0828 16:52:09.606619 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2413234350 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0828 16:52:09.606702 20719 exec_runner.go:51] Run: sudo systemctl start kubelet
I0828 16:52:09.628692 20719 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0828 16:52:09.628729 20719 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0828 16:52:09.628865 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube236456223 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0828 16:52:09.645282 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0828 16:52:09.651916 20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0828 16:52:09.651947 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0828 16:52:09.652066 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1805233403 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0828 16:52:09.659638 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0828 16:52:09.659813 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0828 16:52:09.680876 20719 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0828 16:52:09.680914 20719 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0828 16:52:09.682272 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube342720222 /etc/kubernetes/addons/ig-crd.yaml
I0828 16:52:09.712408 20719 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
I0828 16:52:09.716703 20719 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
I0828 16:52:09.716732 20719 node_ready.go:38] duration metric: took 4.292467ms for node "ubuntu-20-agent-2" to be "Ready" ...
I0828 16:52:09.716745 20719 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0828 16:52:09.726369 20719 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace to be "Ready" ...
I0828 16:52:09.738070 20719 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0828 16:52:09.738105 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0828 16:52:09.738224 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2250346700 /etc/kubernetes/addons/ig-daemonset.yaml
I0828 16:52:09.742704 20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0828 16:52:09.742730 20719 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0828 16:52:09.742846 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4233661258 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0828 16:52:09.766247 20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0828 16:52:09.766286 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0828 16:52:09.766547 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1346394353 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0828 16:52:09.782972 20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0828 16:52:09.783005 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0828 16:52:09.783133 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2498888286 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0828 16:52:09.827335 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0828 16:52:09.887471 20719 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0828 16:52:09.887508 20719 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0828 16:52:09.887637 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1193548962 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0828 16:52:09.907610 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0828 16:52:10.058918 20719 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0828 16:52:10.296332 20719 addons.go:475] Verifying addon registry=true in "minikube"
I0828 16:52:10.310014 20719 out.go:177] * Verifying registry addon...
I0828 16:52:10.312660 20719 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0828 16:52:10.316102 20719 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0828 16:52:10.316121 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:10.482217 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.056393969s)
I0828 16:52:10.577420 20719 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0828 16:52:10.588055 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.167370011s)
I0828 16:52:10.590673 20719 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0828 16:52:10.643668 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.125935758s)
I0828 16:52:10.834043 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:10.906535 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.330508284s)
I0828 16:52:10.906572 20719 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0828 16:52:10.964691 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.13730518s)
I0828 16:52:11.116699 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.457018181s)
I0828 16:52:11.319319 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:11.416752 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.771404749s)
W0828 16:52:11.416796 20719 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0828 16:52:11.416824 20719 retry.go:31] will retry after 203.233605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
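(Annotation: the failure above is the classic CRD-establishment race — the `VolumeSnapshotClass` object is applied in the same `kubectl apply` batch that creates its CRD, and the API server has not yet registered the new kind. minikube's `retry.go` handles this by re-running the apply after a backoff, as the next log line shows. A minimal shell sketch of that retry pattern — not minikube's actual Go implementation, and `apply_with_retry` is a hypothetical helper name:)

```shell
# Re-run a command (e.g. `kubectl apply -f ...`) until it succeeds,
# backing off between attempts; CRD-backed resources typically succeed
# once the API server has established the CRD from an earlier attempt.
apply_with_retry() {
    # usage: apply_with_retry <max_attempts> <delay_seconds> <command...>
    local max="$1" delay="$2"; shift 2
    local attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            echo "giving up after $attempt attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep "$delay"
    done
}
```

(With `--force` added on the retry, as the log's follow-up command does, kubectl recreates objects instead of patching them, which sidesteps stale resource mappings.)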
I0828 16:52:11.621057 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0828 16:52:11.733790 20719 pod_ready.go:103] pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:11.816393 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:12.320028 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:12.391749 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.017333384s)
I0828 16:52:12.592540 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.684878592s)
I0828 16:52:12.592578 20719 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0828 16:52:12.598971 20719 out.go:177] * Verifying csi-hostpath-driver addon...
I0828 16:52:12.602059 20719 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0828 16:52:12.606135 20719 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0828 16:52:12.606160 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:12.818824 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:13.106438 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:13.316732 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:13.607323 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:13.817106 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:14.106393 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:14.231423 20719 pod_ready.go:103] pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:14.316214 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:14.416509 20719 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.795372342s)
I0828 16:52:14.607063 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:14.816252 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:15.107721 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:15.316647 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:15.606764 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:15.816503 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:16.107789 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:16.232593 20719 pod_ready.go:103] pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace has status "Ready":"False"
I0828 16:52:16.316924 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:16.326272 20719 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0828 16:52:16.326425 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1705106344 /var/lib/minikube/google_application_credentials.json
I0828 16:52:16.338275 20719 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0828 16:52:16.338419 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3813997986 /var/lib/minikube/google_cloud_project
I0828 16:52:16.349902 20719 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0828 16:52:16.349959 20719 host.go:66] Checking if "minikube" exists ...
I0828 16:52:16.350443 20719 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0828 16:52:16.350462 20719 api_server.go:166] Checking apiserver status ...
I0828 16:52:16.350496 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:16.366772 20719 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/22041/cgroup
I0828 16:52:16.376050 20719 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79"
I0828 16:52:16.376100 20719 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod121b7d8d8a687dd5fb906898e9cbd082/1bea33e72497f53515164ca8b9b1790eacdfbd12e2ed9176b1e56371f3cfec79/freezer.state
I0828 16:52:16.388279 20719 api_server.go:204] freezer state: "THAWED"
I0828 16:52:16.388304 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:16.392800 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:16.392858 20719 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0828 16:52:16.395607 20719 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0828 16:52:16.397140 20719 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0828 16:52:16.398393 20719 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0828 16:52:16.398439 20719 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0828 16:52:16.398575 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2661921807 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0828 16:52:16.410235 20719 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0828 16:52:16.410265 20719 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0828 16:52:16.410395 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3492112993 /etc/kubernetes/addons/gcp-auth-service.yaml
I0828 16:52:16.420861 20719 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0828 16:52:16.420891 20719 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0828 16:52:16.421004 20719 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1888726884 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0828 16:52:16.430238 20719 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0828 16:52:16.606902 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:16.817307 20719 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0828 16:52:16.818806 20719 out.go:177] * Verifying gcp-auth addon...
I0828 16:52:16.819469 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:16.820964 20719 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0828 16:52:16.947834 20719 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0828 16:52:17.155408 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:17.233047 20719 pod_ready.go:93] pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:17.233069 20719 pod_ready.go:82] duration metric: took 7.506622225s for pod "coredns-6f6b679f8f-6tkq4" in "kube-system" namespace to be "Ready" ...
I0828 16:52:17.233078 20719 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-v9dmm" in "kube-system" namespace to be "Ready" ...
I0828 16:52:17.317170 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:17.668726 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:17.817052 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:18.107087 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:18.235604 20719 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-v9dmm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-v9dmm" not found
I0828 16:52:18.235631 20719 pod_ready.go:82] duration metric: took 1.002545512s for pod "coredns-6f6b679f8f-v9dmm" in "kube-system" namespace to be "Ready" ...
E0828 16:52:18.235644 20719 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-v9dmm" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-v9dmm" not found
I0828 16:52:18.235653 20719 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.240520 20719 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:18.240542 20719 pod_ready.go:82] duration metric: took 4.880246ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.240555 20719 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.244728 20719 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:18.244745 20719 pod_ready.go:82] duration metric: took 4.182863ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.244755 20719 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.248907 20719 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:18.248926 20719 pod_ready.go:82] duration metric: took 4.163274ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.248938 20719 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gpxwg" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.252742 20719 pod_ready.go:93] pod "kube-proxy-gpxwg" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:18.252758 20719 pod_ready.go:82] duration metric: took 3.813427ms for pod "kube-proxy-gpxwg" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.252768 20719 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.316862 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:18.606950 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:18.630508 20719 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0828 16:52:18.630527 20719 pod_ready.go:82] duration metric: took 377.75139ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0828 16:52:18.630535 20719 pod_ready.go:39] duration metric: took 8.913776771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0828 16:52:18.630551 20719 api_server.go:52] waiting for apiserver process to appear ...
I0828 16:52:18.630602 20719 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0828 16:52:18.647546 20719 api_server.go:72] duration metric: took 9.454981109s to wait for apiserver process to appear ...
I0828 16:52:18.647567 20719 api_server.go:88] waiting for apiserver healthz status ...
I0828 16:52:18.647581 20719 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0828 16:52:18.650853 20719 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0828 16:52:18.651658 20719 api_server.go:141] control plane version: v1.31.0
I0828 16:52:18.651680 20719 api_server.go:131] duration metric: took 4.106926ms to wait for apiserver health ...
I0828 16:52:18.651689 20719 system_pods.go:43] waiting for kube-system pods to appear ...
I0828 16:52:18.816251 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:18.836125 20719 system_pods.go:59] 17 kube-system pods found
I0828 16:52:18.836170 20719 system_pods.go:61] "coredns-6f6b679f8f-6tkq4" [9f4822de-7704-4673-ae47-3d95f1d6b20d] Running
I0828 16:52:18.836183 20719 system_pods.go:61] "csi-hostpath-attacher-0" [b57205bd-bf4a-4031-acf5-a3454e49b197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0828 16:52:18.836194 20719 system_pods.go:61] "csi-hostpath-resizer-0" [5f81070d-2097-4b29-847b-92924f3565c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0828 16:52:18.836207 20719 system_pods.go:61] "csi-hostpathplugin-lq5tq" [fbda2aa4-f9ea-486f-bc10-50dfed11eece] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0828 16:52:18.836218 20719 system_pods.go:61] "etcd-ubuntu-20-agent-2" [470651cd-1ec3-4c7d-b39e-468343f4dffa] Running
I0828 16:52:18.836227 20719 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [81f66e5e-231f-4c7d-88da-f1fa1de4f183] Running
I0828 16:52:18.836237 20719 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [dfdedc5e-3693-4bac-8eab-0b735fa89664] Running
I0828 16:52:18.836244 20719 system_pods.go:61] "kube-proxy-gpxwg" [64d274a0-dfd5-4a29-98b2-3ef060017ab5] Running
I0828 16:52:18.836250 20719 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [9093644a-c893-4630-a115-6b986c987a97] Running
I0828 16:52:18.836265 20719 system_pods.go:61] "metrics-server-84c5f94fbc-2ngms" [8c24b175-53cd-473a-99e4-0391de9f4873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0828 16:52:18.836290 20719 system_pods.go:61] "nvidia-device-plugin-daemonset-vb2jx" [c93e7df8-d7eb-4c47-9006-ba23929faefa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I0828 16:52:18.836305 20719 system_pods.go:61] "registry-6fb4cdfc84-rtp2b" [2c727f7b-0cf9-4843-a060-78e13883fe27] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0828 16:52:18.836316 20719 system_pods.go:61] "registry-proxy-v7gq6" [4d2021b8-55cb-4260-88fc-edac5c2173d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0828 16:52:18.836330 20719 system_pods.go:61] "snapshot-controller-56fcc65765-js985" [f9e3a59b-f20f-4453-9425-e8b795573e49] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0828 16:52:18.836345 20719 system_pods.go:61] "snapshot-controller-56fcc65765-p4gqz" [8d57f713-7241-406b-a9c5-21a6542c01ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0828 16:52:18.836354 20719 system_pods.go:61] "storage-provisioner" [01bf7c20-346b-44e8-8daf-8e2532f55d91] Running
I0828 16:52:18.836368 20719 system_pods.go:61] "tiller-deploy-b48cc5f79-cf7dp" [82d95935-34cf-42d7-9210-1f3bd5b0ff29] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0828 16:52:18.836382 20719 system_pods.go:74] duration metric: took 184.685113ms to wait for pod list to return data ...
I0828 16:52:18.836398 20719 default_sa.go:34] waiting for default service account to be created ...
I0828 16:52:19.030365 20719 default_sa.go:45] found service account: "default"
I0828 16:52:19.030387 20719 default_sa.go:55] duration metric: took 193.979614ms for default service account to be created ...
I0828 16:52:19.030396 20719 system_pods.go:116] waiting for k8s-apps to be running ...
I0828 16:52:19.106206 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:19.234210 20719 system_pods.go:86] 17 kube-system pods found
I0828 16:52:19.234237 20719 system_pods.go:89] "coredns-6f6b679f8f-6tkq4" [9f4822de-7704-4673-ae47-3d95f1d6b20d] Running
I0828 16:52:19.234244 20719 system_pods.go:89] "csi-hostpath-attacher-0" [b57205bd-bf4a-4031-acf5-a3454e49b197] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0828 16:52:19.234251 20719 system_pods.go:89] "csi-hostpath-resizer-0" [5f81070d-2097-4b29-847b-92924f3565c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0828 16:52:19.234258 20719 system_pods.go:89] "csi-hostpathplugin-lq5tq" [fbda2aa4-f9ea-486f-bc10-50dfed11eece] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0828 16:52:19.234262 20719 system_pods.go:89] "etcd-ubuntu-20-agent-2" [470651cd-1ec3-4c7d-b39e-468343f4dffa] Running
I0828 16:52:19.234266 20719 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [81f66e5e-231f-4c7d-88da-f1fa1de4f183] Running
I0828 16:52:19.234270 20719 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [dfdedc5e-3693-4bac-8eab-0b735fa89664] Running
I0828 16:52:19.234274 20719 system_pods.go:89] "kube-proxy-gpxwg" [64d274a0-dfd5-4a29-98b2-3ef060017ab5] Running
I0828 16:52:19.234277 20719 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [9093644a-c893-4630-a115-6b986c987a97] Running
I0828 16:52:19.234284 20719 system_pods.go:89] "metrics-server-84c5f94fbc-2ngms" [8c24b175-53cd-473a-99e4-0391de9f4873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0828 16:52:19.234289 20719 system_pods.go:89] "nvidia-device-plugin-daemonset-vb2jx" [c93e7df8-d7eb-4c47-9006-ba23929faefa] Running
I0828 16:52:19.234294 20719 system_pods.go:89] "registry-6fb4cdfc84-rtp2b" [2c727f7b-0cf9-4843-a060-78e13883fe27] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0828 16:52:19.234302 20719 system_pods.go:89] "registry-proxy-v7gq6" [4d2021b8-55cb-4260-88fc-edac5c2173d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0828 16:52:19.234307 20719 system_pods.go:89] "snapshot-controller-56fcc65765-js985" [f9e3a59b-f20f-4453-9425-e8b795573e49] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0828 16:52:19.234317 20719 system_pods.go:89] "snapshot-controller-56fcc65765-p4gqz" [8d57f713-7241-406b-a9c5-21a6542c01ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0828 16:52:19.234320 20719 system_pods.go:89] "storage-provisioner" [01bf7c20-346b-44e8-8daf-8e2532f55d91] Running
I0828 16:52:19.234325 20719 system_pods.go:89] "tiller-deploy-b48cc5f79-cf7dp" [82d95935-34cf-42d7-9210-1f3bd5b0ff29] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0828 16:52:19.234333 20719 system_pods.go:126] duration metric: took 203.930569ms to wait for k8s-apps to be running ...
I0828 16:52:19.234341 20719 system_svc.go:44] waiting for kubelet service to be running ....
I0828 16:52:19.234381 20719 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0828 16:52:19.247940 20719 system_svc.go:56] duration metric: took 13.592661ms WaitForService to wait for kubelet
I0828 16:52:19.247967 20719 kubeadm.go:582] duration metric: took 10.055404456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0828 16:52:19.247984 20719 node_conditions.go:102] verifying NodePressure condition ...
I0828 16:52:19.315859 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:19.429973 20719 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0828 16:52:19.429995 20719 node_conditions.go:123] node cpu capacity is 8
I0828 16:52:19.430005 20719 node_conditions.go:105] duration metric: took 182.017368ms to run NodePressure ...
I0828 16:52:19.430019 20719 start.go:241] waiting for startup goroutines ...
I0828 16:52:19.606909 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:19.816101 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:20.107204 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:20.316241 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:20.605915 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:20.815784 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:21.105886 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:21.316240 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:21.606812 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:21.817046 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:22.106997 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:22.315957 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:22.606595 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:22.816948 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:23.107112 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:23.316320 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:23.606923 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:23.839011 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:24.107068 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:24.315895 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:24.606478 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:24.816863 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:25.106694 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:25.317022 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:25.607322 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:25.816275 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:26.107479 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:26.316704 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:26.606430 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:26.816111 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:27.106376 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:27.316544 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:27.605945 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:27.816976 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0828 16:52:28.106387 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:28.316571 20719 kapi.go:107] duration metric: took 18.003905693s to wait for kubernetes.io/minikube-addons=registry ...
I0828 16:52:28.605942 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:29.109869 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:29.606158 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:30.106762 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:30.607180 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:31.107067 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:31.607405 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:32.107396 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:32.606033 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:33.106887 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:33.606840 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:34.106544 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:34.606970 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:35.106567 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:35.606551 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:36.107618 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:36.606633 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:37.106814 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:37.606647 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:38.106703 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:38.606514 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:39.106989 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:39.606775 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:40.106082 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:40.606672 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:41.107925 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:41.605986 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:42.106210 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:42.606882 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:43.106834 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:43.606429 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:44.107435 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:44.606289 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:45.107600 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:45.607067 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:46.106946 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:46.606805 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:47.107330 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:47.606388 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:48.107327 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:48.606675 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0828 16:52:49.105959 20719 kapi.go:107] duration metric: took 36.503902657s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0828 16:52:58.323952 20719 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0828 16:52:58.323971 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:52:58.824027 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:52:59.323832 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:52:59.823995 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:00.324470 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:00.824450 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:01.324105 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:01.824094 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:02.324412 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:02.824235 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:03.324359 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:03.824028 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:04.324145 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:04.823859 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:05.324095 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:05.824081 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:06.324034 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:06.823666 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:07.324687 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:07.824527 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:08.324566 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:08.824489 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:09.324636 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:09.824449 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:10.324560 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:10.824549 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:11.324599 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:11.824062 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:12.324220 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:12.824181 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:13.323786 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:13.825107 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:14.325203 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:14.823768 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:15.324404 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:15.824516 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:16.324475 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:16.824216 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:17.323960 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:17.823685 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:18.324553 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:18.824717 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:19.324528 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:19.824532 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:20.325050 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:20.824290 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:21.327090 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:21.824086 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:22.324189 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:22.824127 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:23.324243 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:23.823844 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:24.324027 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:24.823978 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:25.323785 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:25.825049 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:26.324451 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:26.824050 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:27.324126 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:27.824041 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:28.324047 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:28.823826 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:29.324527 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:29.824388 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:30.324894 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:30.824763 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:31.324674 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:31.823963 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:32.324628 20719 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0828 16:53:32.824509 20719 kapi.go:107] duration metric: took 1m16.003545291s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0828 16:53:32.826258 20719 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0828 16:53:32.827462 20719 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0828 16:53:32.828682 20719 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0828 16:53:32.830032 20719 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, default-storageclass, helm-tiller, yakd, storage-provisioner, metrics-server, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0828 16:53:32.831271 20719 addons.go:510] duration metric: took 1m23.645050177s for enable addons: enabled=[cloud-spanner nvidia-device-plugin default-storageclass helm-tiller yakd storage-provisioner metrics-server inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0828 16:53:32.831310 20719 start.go:246] waiting for cluster config update ...
I0828 16:53:32.831327 20719 start.go:255] writing updated cluster config ...
I0828 16:53:32.831554 20719 exec_runner.go:51] Run: rm -f paused
I0828 16:53:32.873811 20719 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
I0828 16:53:32.875305 20719 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Thu 2024-07-18 18:21:33 UTC, end at Wed 2024-08-28 17:03:24 UTC. --
Aug 28 16:55:43 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:55:43.320965688Z" level=error msg="stream copy error: reading from a closed fifo"
Aug 28 16:55:43 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:55:43.320968249Z" level=error msg="stream copy error: reading from a closed fifo"
Aug 28 16:55:43 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:55:43.322629490Z" level=error msg="Error running exec 6a350433c0c7cefb159817b5b9124a95b640023503c8319c8fbaa84b59b70133 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Aug 28 16:55:43 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:55:43.407538652Z" level=info msg="ignoring event" container=fd4eb1bc477fd0aafeb47027587f6da6afa534d2fcc1f09104d916b555e0e5a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 16:57:10 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:57:10.075877170Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Aug 28 16:57:10 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:57:10.078089823Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Aug 28 16:58:33 ubuntu-20-agent-2 cri-dockerd[21264]: time="2024-08-28T16:58:33Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc"
Aug 28 16:58:34 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:58:34.269652382Z" level=error msg="stream copy error: reading from a closed fifo"
Aug 28 16:58:34 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:58:34.269741327Z" level=error msg="stream copy error: reading from a closed fifo"
Aug 28 16:58:34 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:58:34.271689104Z" level=error msg="Error running exec 5b960aed91d1276e11e71a3e0b45951a3efa6f064c82632e5fa0a8406ac7a74d in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Aug 28 16:58:34 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T16:58:34.382196877Z" level=info msg="ignoring event" container=24aae337936daaece615cd4278c47021b1b0aada6007c1792cc8cfdb3ea21a58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:00:01 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:00:01.064854804Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Aug 28 17:00:01 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:00:01.066792558Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Aug 28 17:02:24 ubuntu-20-agent-2 cri-dockerd[21264]: time="2024-08-28T17:02:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2226ebe473ce84ab6edf16972b9c08fbea4c9f7729bb7c565f41f52fc0a6d255/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Aug 28 17:02:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:02:24.240603178Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Aug 28 17:02:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:02:24.242845844Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Aug 28 17:02:39 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:02:39.069251280Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Aug 28 17:02:39 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:02:39.071331092Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Aug 28 17:03:06 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:06.074684140Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Aug 28 17:03:06 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:06.076971828Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Aug 28 17:03:23 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:23.697195289Z" level=info msg="ignoring event" container=2226ebe473ce84ab6edf16972b9c08fbea4c9f7729bb7c565f41f52fc0a6d255 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:03:23 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:23.948157516Z" level=info msg="ignoring event" container=2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:03:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:24.013245450Z" level=info msg="ignoring event" container=d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:03:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:24.100011136Z" level=info msg="ignoring event" container=f62c5f3fdf5ad00837d130e8d5d151d85ecb004fa362aa01e3acb023ed6672b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 17:03:24 ubuntu-20-agent-2 dockerd[20936]: time="2024-08-28T17:03:24.173998058Z" level=info msg="ignoring event" container=d3a683036ac72181a27d676661a4158610a9662069da8be88825de0c5f7e411f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
24aae337936da ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc 4 minutes ago Exited gadget 6 0084ee5c49f65 gadget-jptxz
4287458110d5c gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 7e3455bfe3def gcp-auth-89d5ffd79-qpsmg
c512fa27a2390 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 44187b93bc6d5 csi-hostpathplugin-lq5tq
9a9b5a12b4d3a registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 44187b93bc6d5 csi-hostpathplugin-lq5tq
b111855d244e4 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 44187b93bc6d5 csi-hostpathplugin-lq5tq
56d6e6a2aecd0 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 44187b93bc6d5 csi-hostpathplugin-lq5tq
50446845d6f76 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 44187b93bc6d5 csi-hostpathplugin-lq5tq
4f2c52d1e58d0 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 df9edbafd9bbf csi-hostpath-resizer-0
8f01a953eb644 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 44187b93bc6d5 csi-hostpathplugin-lq5tq
5cd606d0a7974 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 7784e5241d8c9 csi-hostpath-attacher-0
c2d07d841d7b7 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 2b8c54076cd38 snapshot-controller-56fcc65765-p4gqz
76499a40cba32 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 4501b8028aee9 snapshot-controller-56fcc65765-js985
11338d4617c58 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 10 minutes ago Running local-path-provisioner 0 fdc81c485b9f8 local-path-provisioner-86d989889c-fdj7z
88b04840669d9 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 10 minutes ago Running metrics-server 0 205b1690a8175 metrics-server-84c5f94fbc-2ngms
1a3d42201e4c7 ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f 10 minutes ago Running tiller 0 d86718d960177 tiller-deploy-b48cc5f79-cf7dp
d8b96a3bb285a gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 10 minutes ago Exited registry-proxy 0 d3a683036ac72 registry-proxy-v7gq6
68e73845e3371 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 11 minutes ago Running yakd 0 6aa7df080dad4 yakd-dashboard-67d98fc6b-gsjcq
2199e6f58c84a registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d 11 minutes ago Exited registry 0 f62c5f3fdf5ad registry-6fb4cdfc84-rtp2b
9c11233fd37e8 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 e57540fc4c013 nvidia-device-plugin-daemonset-vb2jx
42dac78b42eb0 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 11 minutes ago Running cloud-spanner-emulator 0 09a5b2321c9f6 cloud-spanner-emulator-769b77f747-tggfz
ca48c1eeb323c 6e38f40d628db 11 minutes ago Running storage-provisioner 0 b26e9b74b2fd2 storage-provisioner
e2e27508ec9ba cbb01a7bd410d 11 minutes ago Running coredns 0 ccc5b2cbd21f8 coredns-6f6b679f8f-6tkq4
480f07a6cbb75 ad83b2ca7b09e 11 minutes ago Running kube-proxy 0 d779fedd88a8e kube-proxy-gpxwg
13f8999b5a71b 1766f54c897f0 11 minutes ago Running kube-scheduler 0 2e5b7f02db33f kube-scheduler-ubuntu-20-agent-2
1bea33e72497f 604f5db92eaa8 11 minutes ago Running kube-apiserver 0 143308d734ff8 kube-apiserver-ubuntu-20-agent-2
ca46c5255f106 045733566833c 11 minutes ago Running kube-controller-manager 0 e759839799e97 kube-controller-manager-ubuntu-20-agent-2
3a3d26b21f836 2e96e5913fc06 11 minutes ago Running etcd 0 e5c379ea2146c etcd-ubuntu-20-agent-2
==> coredns [e2e27508ec9b] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:55649 - 16064 "HINFO IN 4598352363640909345.3241484874310641819. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.17111116s
[INFO] 10.244.0.24:47649 - 7325 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314958s
[INFO] 10.244.0.24:34012 - 52853 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00040694s
[INFO] 10.244.0.24:59787 - 17040 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009985s
[INFO] 10.244.0.24:59089 - 56641 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089256s
[INFO] 10.244.0.24:40636 - 59812 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101528s
[INFO] 10.244.0.24:52084 - 4582 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079923s
[INFO] 10.244.0.24:35527 - 24090 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003561917s
[INFO] 10.244.0.24:44024 - 37501 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005092398s
[INFO] 10.244.0.24:45901 - 58319 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003584164s
[INFO] 10.244.0.24:59493 - 33044 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003848287s
[INFO] 10.244.0.24:41584 - 59639 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002405873s
[INFO] 10.244.0.24:41422 - 38207 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00249938s
[INFO] 10.244.0.24:41013 - 24388 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001218287s
[INFO] 10.244.0.24:40080 - 48316 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001384498s
==> describe nodes <==
Name: ubuntu-20-agent-2
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-2
kubernetes.io/os=linux
minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_28T16_52_04_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-2
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 28 Aug 2024 16:52:01 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-2
AcquireTime: <unset>
RenewTime: Wed, 28 Aug 2024 17:03:16 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 28 Aug 2024 16:59:11 +0000 Wed, 28 Aug 2024 16:52:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 28 Aug 2024 16:59:11 +0000 Wed, 28 Aug 2024 16:52:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 28 Aug 2024 16:59:11 +0000 Wed, 28 Aug 2024 16:52:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 28 Aug 2024 16:59:11 +0000 Wed, 28 Aug 2024 16:52:02 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.138.0.48
Hostname: ubuntu-20-agent-2
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 1ec29a5c-5f40-e854-ccac-68a60c2524db
Boot ID: d1649260-66e1-4b69-b671-0d72f89b8086
Kernel Version: 5.15.0-1067-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.2.0
Kubelet Version: v1.31.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (21 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m13s
default cloud-spanner-emulator-769b77f747-tggfz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-jptxz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-qpsmg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-6f6b679f8f-6tkq4 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-lq5tq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-2 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-2 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-2 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-gpxwg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-2 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-2ngms 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-vb2jx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-js985 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-p4gqz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system tiller-deploy-b48cc5f79-cf7dp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-fdj7z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-gsjcq 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
==> dmesg <==
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 14 c6 f4 e7 81 08 06
[ +0.022167] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 ad ce 12 58 29 08 06
[ +2.606750] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 07 1c 25 dd 91 08 06
[ +1.645026] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 db 96 ce df dc 08 06
[ +1.959788] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ce 5f 00 f0 30 25 08 06
[ +4.473417] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 87 90 81 92 b7 08 06
[ +0.184861] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 8f 9f a6 e2 ef 08 06
[ +0.083137] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 a9 59 00 f1 9b 08 06
[ +0.134707] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff fe f5 a4 1e 8d f3 08 06
[Aug28 16:53] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 e1 f9 18 7f 0c 08 06
[ +0.036390] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d 6e 9c 55 c1 08 06
[ +11.169244] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 c1 af 06 bf 4b 08 06
[ +0.000514] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e ba 09 dd df 68 08 06
==> etcd [3a3d26b21f83] <==
{"level":"info","ts":"2024-08-28T16:52:00.029310Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-08-28T16:52:00.316395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 1"}
{"level":"info","ts":"2024-08-28T16:52:00.316445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
{"level":"info","ts":"2024-08-28T16:52:00.316472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
{"level":"info","ts":"2024-08-28T16:52:00.316488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
{"level":"info","ts":"2024-08-28T16:52:00.316497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-08-28T16:52:00.316509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
{"level":"info","ts":"2024-08-28T16:52:00.316522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-08-28T16:52:00.317345Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-28T16:52:00.317812Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-28T16:52:00.317838Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-28T16:52:00.317850Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-28T16:52:00.318123Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-28T16:52:00.318145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-28T16:52:00.318174Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-28T16:52:00.318243Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-28T16:52:00.318269Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-28T16:52:00.318912Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-28T16:52:00.319066Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-28T16:52:00.319816Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
{"level":"info","ts":"2024-08-28T16:52:00.320210Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-28T16:52:17.151151Z","caller":"traceutil/trace.go:171","msg":"trace[415500967] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"112.476197ms","start":"2024-08-28T16:52:17.038655Z","end":"2024-08-28T16:52:17.151132Z","steps":["trace[415500967] 'process raft request' (duration: 112.349262ms)"],"step_count":1}
{"level":"info","ts":"2024-08-28T17:02:00.462417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1717}
{"level":"info","ts":"2024-08-28T17:02:00.485412Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1717,"took":"22.505734ms","hash":1135449533,"current-db-size-bytes":8056832,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4354048,"current-db-size-in-use":"4.4 MB"}
{"level":"info","ts":"2024-08-28T17:02:00.485454Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1135449533,"revision":1717,"compact-revision":-1}
==> gcp-auth [4287458110d5] <==
2024/08/28 16:53:32 GCP Auth Webhook started!
2024/08/28 16:53:49 Ready to marshal response ...
2024/08/28 16:53:49 Ready to write response ...
2024/08/28 16:53:49 Ready to marshal response ...
2024/08/28 16:53:49 Ready to write response ...
2024/08/28 16:54:11 Ready to marshal response ...
2024/08/28 16:54:11 Ready to write response ...
2024/08/28 16:54:11 Ready to marshal response ...
2024/08/28 16:54:11 Ready to write response ...
2024/08/28 16:54:11 Ready to marshal response ...
2024/08/28 16:54:11 Ready to write response ...
2024/08/28 17:02:23 Ready to marshal response ...
2024/08/28 17:02:23 Ready to write response ...
==> kernel <==
17:03:24 up 45 min, 0 users, load average: 0.40, 0.39, 0.34
Linux ubuntu-20-agent-2 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [1bea33e72497] <==
W0828 16:52:51.721286 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.161.172:443: connect: connection refused
W0828 16:52:57.826527 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.8.25:443: connect: connection refused
E0828 16:52:57.826559 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.8.25:443: connect: connection refused" logger="UnhandledError"
W0828 16:53:19.838645 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.8.25:443: connect: connection refused
E0828 16:53:19.838685 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.8.25:443: connect: connection refused" logger="UnhandledError"
W0828 16:53:19.844914 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.8.25:443: connect: connection refused
E0828 16:53:19.844956 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.8.25:443: connect: connection refused" logger="UnhandledError"
I0828 16:53:49.130346 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0828 16:53:49.147067 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0828 16:54:01.494193 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0828 16:54:01.505728 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0828 16:54:01.604947 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0828 16:54:01.628897 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0828 16:54:01.628983 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0828 16:54:01.634019 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0828 16:54:01.785177 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0828 16:54:01.810938 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0828 16:54:01.858342 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0828 16:54:02.644689 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0828 16:54:02.662809 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0828 16:54:02.772294 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0828 16:54:02.803158 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0828 16:54:02.803156 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0828 16:54:02.858777 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0828 16:54:03.016745 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [ca46c5255f10] <==
W0828 17:02:13.328707 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:13.328758 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:15.963754 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:15.963795 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:16.229333 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:16.229373 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:18.979569 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:18.979611 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:19.287768 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:19.287809 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:47.867355 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:47.867396 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:49.304487 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:49.304526 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:54.937482 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:54.937530 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:55.088603 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:55.088648 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:02:55.342205 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:02:55.342246 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:03:00.746363 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:03:00.746405 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0828 17:03:12.393065 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0828 17:03:12.393106 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0828 17:03:23.914485 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="8.259µs"
==> kube-proxy [480f07a6cbb7] <==
I0828 16:52:09.781309 1 server_linux.go:66] "Using iptables proxy"
I0828 16:52:10.060618 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
E0828 16:52:10.060707 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0828 16:52:10.233590 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0828 16:52:10.233657 1 server_linux.go:169] "Using iptables Proxier"
I0828 16:52:10.237216 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0828 16:52:10.237573 1 server.go:483] "Version info" version="v1.31.0"
I0828 16:52:10.237595 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0828 16:52:10.241197 1 config.go:197] "Starting service config controller"
I0828 16:52:10.241218 1 shared_informer.go:313] Waiting for caches to sync for service config
I0828 16:52:10.241238 1 config.go:104] "Starting endpoint slice config controller"
I0828 16:52:10.241243 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0828 16:52:10.241789 1 config.go:326] "Starting node config controller"
I0828 16:52:10.241886 1 shared_informer.go:313] Waiting for caches to sync for node config
I0828 16:52:10.341279 1 shared_informer.go:320] Caches are synced for service config
I0828 16:52:10.341379 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0828 16:52:10.341964 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [13f8999b5a71] <==
W0828 16:52:01.314154 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0828 16:52:01.314112 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0828 16:52:01.314180 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0828 16:52:01.314170 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0828 16:52:01.314193 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0828 16:52:01.314185 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0828 16:52:01.314219 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0828 16:52:01.314153 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0828 16:52:02.244775 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0828 16:52:02.244816 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0828 16:52:02.259148 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0828 16:52:02.259176 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0828 16:52:02.277435 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0828 16:52:02.277477 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0828 16:52:02.298769 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0828 16:52:02.298808 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0828 16:52:02.315643 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0828 16:52:02.315686 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0828 16:52:02.325958 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0828 16:52:02.325996 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0828 16:52:02.493659 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0828 16:52:02.493698 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0828 16:52:02.583650 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0828 16:52:02.583688 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0828 16:52:04.610198 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Thu 2024-07-18 18:21:33 UTC, end at Wed 2024-08-28 17:03:24 UTC. --
Aug 28 17:03:11 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:11.926646 22182 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="119c62ae-02ac-4215-aba8-bea987228018"
Aug 28 17:03:17 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:17.926937 22182 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="b503f647-1d8d-481b-877b-c5c54812988d"
Aug 28 17:03:22 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:22.924546 22182 scope.go:117] "RemoveContainer" containerID="24aae337936daaece615cd4278c47021b1b0aada6007c1792cc8cfdb3ea21a58"
Aug 28 17:03:22 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:22.924803 22182 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-jptxz_gadget(7a692f95-8588-4031-aad2-f492771c07d4)\"" pod="gadget/gadget-jptxz" podUID="7a692f95-8588-4031-aad2-f492771c07d4"
Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.785041 22182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjdzm\" (UniqueName: \"kubernetes.io/projected/b503f647-1d8d-481b-877b-c5c54812988d-kube-api-access-tjdzm\") pod \"b503f647-1d8d-481b-877b-c5c54812988d\" (UID: \"b503f647-1d8d-481b-877b-c5c54812988d\") "
Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.785089 22182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b503f647-1d8d-481b-877b-c5c54812988d-gcp-creds\") pod \"b503f647-1d8d-481b-877b-c5c54812988d\" (UID: \"b503f647-1d8d-481b-877b-c5c54812988d\") "
Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.785180 22182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b503f647-1d8d-481b-877b-c5c54812988d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b503f647-1d8d-481b-877b-c5c54812988d" (UID: "b503f647-1d8d-481b-877b-c5c54812988d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.786873 22182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b503f647-1d8d-481b-877b-c5c54812988d-kube-api-access-tjdzm" (OuterVolumeSpecName: "kube-api-access-tjdzm") pod "b503f647-1d8d-481b-877b-c5c54812988d" (UID: "b503f647-1d8d-481b-877b-c5c54812988d"). InnerVolumeSpecName "kube-api-access-tjdzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.885898 22182 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b503f647-1d8d-481b-877b-c5c54812988d-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:23.885934 22182 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tjdzm\" (UniqueName: \"kubernetes.io/projected/b503f647-1d8d-481b-877b-c5c54812988d-kube-api-access-tjdzm\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Aug 28 17:03:23 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:23.927346 22182 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="119c62ae-02ac-4215-aba8-bea987228018"
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.187748 22182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqb5n\" (UniqueName: \"kubernetes.io/projected/2c727f7b-0cf9-4843-a060-78e13883fe27-kube-api-access-nqb5n\") pod \"2c727f7b-0cf9-4843-a060-78e13883fe27\" (UID: \"2c727f7b-0cf9-4843-a060-78e13883fe27\") "
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.189991 22182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c727f7b-0cf9-4843-a060-78e13883fe27-kube-api-access-nqb5n" (OuterVolumeSpecName: "kube-api-access-nqb5n") pod "2c727f7b-0cf9-4843-a060-78e13883fe27" (UID: "2c727f7b-0cf9-4843-a060-78e13883fe27"). InnerVolumeSpecName "kube-api-access-nqb5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.289005 22182 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsxx2\" (UniqueName: \"kubernetes.io/projected/4d2021b8-55cb-4260-88fc-edac5c2173d8-kube-api-access-vsxx2\") pod \"4d2021b8-55cb-4260-88fc-edac5c2173d8\" (UID: \"4d2021b8-55cb-4260-88fc-edac5c2173d8\") "
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.289134 22182 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nqb5n\" (UniqueName: \"kubernetes.io/projected/2c727f7b-0cf9-4843-a060-78e13883fe27-kube-api-access-nqb5n\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.290937 22182 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2021b8-55cb-4260-88fc-edac5c2173d8-kube-api-access-vsxx2" (OuterVolumeSpecName: "kube-api-access-vsxx2") pod "4d2021b8-55cb-4260-88fc-edac5c2173d8" (UID: "4d2021b8-55cb-4260-88fc-edac5c2173d8"). InnerVolumeSpecName "kube-api-access-vsxx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.390334 22182 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vsxx2\" (UniqueName: \"kubernetes.io/projected/4d2021b8-55cb-4260-88fc-edac5c2173d8-kube-api-access-vsxx2\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.569820 22182 scope.go:117] "RemoveContainer" containerID="d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.592162 22182 scope.go:117] "RemoveContainer" containerID="d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:24.593629 22182 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f" containerID="d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.593665 22182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"} err="failed to get container status \"d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f\": rpc error: code = Unknown desc = Error response from daemon: No such container: d8b96a3bb285a6d611679d5a2599883aff51471a9c44048de51b13201eb6896f"
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.593687 22182 scope.go:117] "RemoveContainer" containerID="2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.611164 22182 scope.go:117] "RemoveContainer" containerID="2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: E0828 17:03:24.611985 22182 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b" containerID="2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"
Aug 28 17:03:24 ubuntu-20-agent-2 kubelet[22182]: I0828 17:03:24.612020 22182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"} err="failed to get container status \"2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2199e6f58c84a34d8b03f20d48caab9dcd14c9490961da1f4770d77946689d0b"
==> storage-provisioner [ca48c1eeb323] <==
I0828 16:52:11.715889 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0828 16:52:11.730409 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0828 16:52:11.730457 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0828 16:52:11.737380 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0828 16:52:11.737570 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_7fc1c5d6-de31-409a-b964-a4acd7b6947e!
I0828 16:52:11.738391 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0eed10bb-c3c4-4843-a38b-667c5bb26c3e", APIVersion:"v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_7fc1c5d6-de31-409a-b964-a4acd7b6947e became leader
I0828 16:52:11.837819 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_7fc1c5d6-de31-409a-b964-a4acd7b6947e!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent-2/10.138.0.48
Start Time:       Wed, 28 Aug 2024 16:54:11 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
  IP:  10.244.0.26
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m74n5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-m74n5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
  Normal   Pulling    7m48s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m47s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m47s (x4 over 9m13s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m34s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m5s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.73s)