=== RUN TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.864936ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-qnsqn" [9d207cfe-fc0d-47fe-ae8e-3720eb38b045] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00343036s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-j9v4g" [01e1c35f-1c90-440d-92e7-defa8bfc5517] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003303722s
addons_test.go:338: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.081207839s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
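The failure above boils down to one check: the busybox pod's `wget --spider -S` against `http://registry.kube-system.svc.cluster.local` must produce an `HTTP/1.1 200` status line before kubectl's attach window expires, and here it never did. A stand-alone sketch of that status-line probe (the function name, retry interval, and deadline are illustrative, not taken from minikube's test code):

```python
import http.client
import time

def probe_registry(host, port=80, deadline_s=60.0, interval_s=2.0):
    """Poll an HTTP endpoint the way `wget --spider -S` does: issue HEAD
    requests until the server answers 200 or the deadline passes.
    Returns the status line (e.g. "HTTP/1.1 200") or raises TimeoutError."""
    deadline = time.monotonic() + deadline_s
    last_err = None
    while time.monotonic() < deadline:
        try:
            conn = http.client.HTTPConnection(host, port, timeout=5)
            conn.request("HEAD", "/")
            resp = conn.getresponse()
            conn.close()
            if resp.status == 200:
                # http.client parses the status line; rebuild it for display
                return f"HTTP/1.{resp.version % 10} {resp.status}"
        except OSError as exc:
            last_err = exc
        time.sleep(interval_s)
    raise TimeoutError(f"no 200 from {host}:{port} (last error: {last_err})")
```

Run from inside the cluster against the registry Service this mirrors the condition the test waits on; outside the cluster the `*.svc.cluster.local` name will not resolve, which is why the test executes the probe via a pod.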
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/13 18:33:57 [DEBUG] GET http://10.154.0.4:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:37771 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:20 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 13 Sep 24 18:20 UTC | 13 Sep 24 18:22 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 18:22 UTC | 13 Sep 24 18:22 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 18:22 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 18:22 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 13 Sep 24 18:22 UTC | 13 Sep 24 18:24 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 13 Sep 24 18:24 UTC | 13 Sep 24 18:24 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 13 Sep 24 18:33 UTC | 13 Sep 24 18:33 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/13 18:22:10
Running on machine: ubuntu-20-agent-9
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0913 18:22:10.064496 14328 out.go:345] Setting OutFile to fd 1 ...
I0913 18:22:10.064756 14328 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:22:10.064766 14328 out.go:358] Setting ErrFile to fd 2...
I0913 18:22:10.064770 14328 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 18:22:10.064945 14328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19636-3707/.minikube/bin
I0913 18:22:10.065550 14328 out.go:352] Setting JSON to false
I0913 18:22:10.066420 14328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":270,"bootTime":1726251460,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0913 18:22:10.066516 14328 start.go:139] virtualization: kvm guest
I0913 18:22:10.068784 14328 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0913 18:22:10.070166 14328 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19636-3707/.minikube/cache/preloaded-tarball: no such file or directory
I0913 18:22:10.070207 14328 notify.go:220] Checking for updates...
I0913 18:22:10.070254 14328 out.go:177] - MINIKUBE_LOCATION=19636
I0913 18:22:10.071732 14328 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0913 18:22:10.073500 14328 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19636-3707/kubeconfig
I0913 18:22:10.075364 14328 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19636-3707/.minikube
I0913 18:22:10.076802 14328 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0913 18:22:10.078024 14328 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0913 18:22:10.079405 14328 driver.go:394] Setting default libvirt URI to qemu:///system
I0913 18:22:10.089919 14328 out.go:177] * Using the none driver based on user configuration
I0913 18:22:10.091401 14328 start.go:297] selected driver: none
I0913 18:22:10.091419 14328 start.go:901] validating driver "none" against <nil>
I0913 18:22:10.091440 14328 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0913 18:22:10.091492 14328 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0913 18:22:10.091789 14328 out.go:270] ! The 'none' driver does not respect the --memory flag
I0913 18:22:10.092332 14328 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0913 18:22:10.092590 14328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0913 18:22:10.092618 14328 cni.go:84] Creating CNI manager for ""
I0913 18:22:10.092674 14328 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0913 18:22:10.092688 14328 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0913 18:22:10.092751 14328 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0913 18:22:10.094439 14328 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0913 18:22:10.095977 14328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/config.json ...
I0913 18:22:10.096016 14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/config.json: {Name:mkd150c72083440d8af87241650f704d226e0f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:10.096194 14328 start.go:360] acquireMachinesLock for minikube: {Name:mk1177d6c2a3f835d0a2cf4f02b8ba8a9aa96d82 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0913 18:22:10.096231 14328 start.go:364] duration metric: took 20.076µs to acquireMachinesLock for "minikube"
I0913 18:22:10.096249 14328 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0913 18:22:10.096323 14328 start.go:125] createHost starting for "" (driver="none")
I0913 18:22:10.098023 14328 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0913 18:22:10.099406 14328 exec_runner.go:51] Run: systemctl --version
I0913 18:22:10.102134 14328 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0913 18:22:10.102184 14328 client.go:168] LocalClient.Create starting
I0913 18:22:10.102324 14328 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3707/.minikube/certs/ca.pem
I0913 18:22:10.102371 14328 main.go:141] libmachine: Decoding PEM data...
I0913 18:22:10.102397 14328 main.go:141] libmachine: Parsing certificate...
I0913 18:22:10.102470 14328 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19636-3707/.minikube/certs/cert.pem
I0913 18:22:10.102497 14328 main.go:141] libmachine: Decoding PEM data...
I0913 18:22:10.102517 14328 main.go:141] libmachine: Parsing certificate...
I0913 18:22:10.102955 14328 client.go:171] duration metric: took 763.006µs to LocalClient.Create
I0913 18:22:10.102986 14328 start.go:167] duration metric: took 863.39µs to libmachine.API.Create "minikube"
I0913 18:22:10.102995 14328 start.go:293] postStartSetup for "minikube" (driver="none")
I0913 18:22:10.103045 14328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0913 18:22:10.103102 14328 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0913 18:22:10.113010 14328 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0913 18:22:10.113032 14328 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0913 18:22:10.113040 14328 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0913 18:22:10.115177 14328 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0913 18:22:10.116477 14328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3707/.minikube/addons for local assets ...
I0913 18:22:10.116538 14328 filesync.go:126] Scanning /home/jenkins/minikube-integration/19636-3707/.minikube/files for local assets ...
I0913 18:22:10.116560 14328 start.go:296] duration metric: took 13.555649ms for postStartSetup
I0913 18:22:10.117178 14328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/config.json ...
I0913 18:22:10.117317 14328 start.go:128] duration metric: took 20.983708ms to createHost
I0913 18:22:10.117330 14328 start.go:83] releasing machines lock for "minikube", held for 21.088992ms
I0913 18:22:10.117634 14328 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0913 18:22:10.117763 14328 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0913 18:22:10.119676 14328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0913 18:22:10.119850 14328 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0913 18:22:10.130480 14328 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0913 18:22:10.130507 14328 start.go:495] detecting cgroup driver to use...
I0913 18:22:10.130536 14328 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0913 18:22:10.130625 14328 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0913 18:22:10.148681 14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0913 18:22:10.157627 14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0913 18:22:10.166827 14328 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0913 18:22:10.166886 14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0913 18:22:10.176218 14328 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0913 18:22:10.186633 14328 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0913 18:22:10.195860 14328 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0913 18:22:10.205493 14328 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0913 18:22:10.213826 14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0913 18:22:10.225497 14328 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0913 18:22:10.236179 14328 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0913 18:22:10.245314 14328 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0913 18:22:10.252802 14328 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0913 18:22:10.260772 14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 18:22:10.474369 14328 exec_runner.go:51] Run: sudo systemctl restart containerd
I0913 18:22:10.599258 14328 start.go:495] detecting cgroup driver to use...
I0913 18:22:10.599326 14328 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0913 18:22:10.599433 14328 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0913 18:22:10.620954 14328 exec_runner.go:51] Run: which cri-dockerd
I0913 18:22:10.622076 14328 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0913 18:22:10.631656 14328 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0913 18:22:10.631675 14328 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0913 18:22:10.631711 14328 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0913 18:22:10.641079 14328 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0913 18:22:10.641267 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1370012668 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0913 18:22:10.650147 14328 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0913 18:22:10.876081 14328 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0913 18:22:11.091120 14328 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0913 18:22:11.091286 14328 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0913 18:22:11.091301 14328 exec_runner.go:203] rm: /etc/docker/daemon.json
I0913 18:22:11.091347 14328 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0913 18:22:11.100525 14328 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0913 18:22:11.100695 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2269362476 /etc/docker/daemon.json
I0913 18:22:11.109304 14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 18:22:11.336958 14328 exec_runner.go:51] Run: sudo systemctl restart docker
I0913 18:22:11.748133 14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0913 18:22:11.759660 14328 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0913 18:22:11.776010 14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0913 18:22:11.787458 14328 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0913 18:22:12.004519 14328 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0913 18:22:12.232992 14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 18:22:12.446713 14328 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0913 18:22:12.460609 14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0913 18:22:12.471894 14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 18:22:12.684169 14328 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0913 18:22:12.753102 14328 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0913 18:22:12.753179 14328 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0913 18:22:12.754553 14328 start.go:563] Will wait 60s for crictl version
I0913 18:22:12.754601 14328 exec_runner.go:51] Run: which crictl
I0913 18:22:12.755327 14328 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0913 18:22:12.788236 14328 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0913 18:22:12.788301 14328 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0913 18:22:12.809369 14328 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0913 18:22:12.833558 14328 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0913 18:22:12.833635 14328 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0913 18:22:12.836443 14328 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0913 18:22:12.837616 14328 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0913 18:22:12.837724 14328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0913 18:22:12.837735 14328 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.1 docker true true} ...
I0913 18:22:12.837808 14328 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0913 18:22:12.837850 14328 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0913 18:22:12.886148 14328 cni.go:84] Creating CNI manager for ""
I0913 18:22:12.886173 14328 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0913 18:22:12.886185 14328 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0913 18:22:12.886210 14328 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0913 18:22:12.886379 14328 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.154.0.4
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ubuntu-20-agent-9"
  kubeletExtraArgs:
    node-ip: 10.154.0.4
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0913 18:22:12.886446 14328 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0913 18:22:12.895363 14328 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0913 18:22:12.895421 14328 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0913 18:22:12.904586 14328 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0913 18:22:12.904586 14328 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0913 18:22:12.904631 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0913 18:22:12.904618 14328 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0913 18:22:12.904673 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0913 18:22:12.904710 14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0913 18:22:12.917826 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0913 18:22:12.955087 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube26677164 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0913 18:22:12.968915 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2207779581 /var/lib/minikube/binaries/v1.31.1/kubectl
I0913 18:22:12.998652 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3904692067 /var/lib/minikube/binaries/v1.31.1/kubelet
I0913 18:22:13.063653 14328 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0913 18:22:13.072336 14328 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0913 18:22:13.072357 14328 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0913 18:22:13.072396 14328 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0913 18:22:13.080405 14328 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I0913 18:22:13.080557 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1860911824 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0913 18:22:13.088934 14328 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0913 18:22:13.088953 14328 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0913 18:22:13.089000 14328 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0913 18:22:13.099087 14328 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0913 18:22:13.099293 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2025006050 /lib/systemd/system/kubelet.service
I0913 18:22:13.108153 14328 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
I0913 18:22:13.108311 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube904266075 /var/tmp/minikube/kubeadm.yaml.new
I0913 18:22:13.117067 14328 exec_runner.go:51] Run: grep 10.154.0.4 control-plane.minikube.internal$ /etc/hosts
I0913 18:22:13.118458 14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 18:22:13.356904 14328 exec_runner.go:51] Run: sudo systemctl start kubelet
I0913 18:22:13.372081 14328 certs.go:68] Setting up /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube for IP: 10.154.0.4
I0913 18:22:13.372104 14328 certs.go:194] generating shared ca certs ...
I0913 18:22:13.372122 14328 certs.go:226] acquiring lock for ca certs: {Name:mk785798fbcf81959753f3319707a0af9d7664a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:13.372244 14328 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19636-3707/.minikube/ca.key
I0913 18:22:13.372280 14328 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19636-3707/.minikube/proxy-client-ca.key
I0913 18:22:13.372288 14328 certs.go:256] generating profile certs ...
I0913 18:22:13.372336 14328 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.key
I0913 18:22:13.372357 14328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt with IP's: []
I0913 18:22:13.472121 14328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt ...
I0913 18:22:13.472150 14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.crt: {Name:mka4c529ecd82dac1d339ecc23df92e7a4b5760a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:13.472277 14328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.key ...
I0913 18:22:13.472288 14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/client.key: {Name:mkf3ba81e5c3f62106e3bc734fad28e18b450c94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:13.472383 14328 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key.1b9420d6
I0913 18:22:13.472399 14328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
I0913 18:22:13.720539 14328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
I0913 18:22:13.720572 14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk5d0e0561d9e7c5ac5b0fdadeb29312aa6ba98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:13.720719 14328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
I0913 18:22:13.720731 14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mka43a7eac85ae832b2eead542b6c36553cf716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:13.720782 14328 certs.go:381] copying /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt
I0913 18:22:13.720853 14328 certs.go:385] copying /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key
I0913 18:22:13.720903 14328 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.key
I0913 18:22:13.720917 14328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0913 18:22:14.033549 14328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.crt ...
I0913 18:22:14.033579 14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.crt: {Name:mk71d7e14da716aec8f7fbf2afe69fe41263189b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:14.033719 14328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.key ...
I0913 18:22:14.033729 14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.key: {Name:mkf9e7d94dfc211aa294d98fdcb9b5236622ab84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:14.033875 14328 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3707/.minikube/certs/ca-key.pem (1679 bytes)
I0913 18:22:14.033908 14328 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3707/.minikube/certs/ca.pem (1078 bytes)
I0913 18:22:14.033930 14328 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3707/.minikube/certs/cert.pem (1123 bytes)
I0913 18:22:14.033950 14328 certs.go:484] found cert: /home/jenkins/minikube-integration/19636-3707/.minikube/certs/key.pem (1679 bytes)
I0913 18:22:14.034512 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0913 18:22:14.034634 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1013393034 /var/lib/minikube/certs/ca.crt
I0913 18:22:14.043563 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0913 18:22:14.043688 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1187429901 /var/lib/minikube/certs/ca.key
I0913 18:22:14.052884 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0913 18:22:14.053014 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1829648559 /var/lib/minikube/certs/proxy-client-ca.crt
I0913 18:22:14.060896 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0913 18:22:14.061034 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4232937632 /var/lib/minikube/certs/proxy-client-ca.key
I0913 18:22:14.070657 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0913 18:22:14.070784 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4007334980 /var/lib/minikube/certs/apiserver.crt
I0913 18:22:14.079321 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0913 18:22:14.079434 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1181490233 /var/lib/minikube/certs/apiserver.key
I0913 18:22:14.088981 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0913 18:22:14.089090 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3362529697 /var/lib/minikube/certs/proxy-client.crt
I0913 18:22:14.097305 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0913 18:22:14.097426 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4137465544 /var/lib/minikube/certs/proxy-client.key
I0913 18:22:14.105511 14328 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0913 18:22:14.105534 14328 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0913 18:22:14.105570 14328 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0913 18:22:14.113102 14328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19636-3707/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0913 18:22:14.113243 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4179127001 /usr/share/ca-certificates/minikubeCA.pem
I0913 18:22:14.121481 14328 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0913 18:22:14.121594 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1380107211 /var/lib/minikube/kubeconfig
I0913 18:22:14.129827 14328 exec_runner.go:51] Run: openssl version
I0913 18:22:14.132478 14328 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0913 18:22:14.141222 14328 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0913 18:22:14.142509 14328 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 13 18:22 /usr/share/ca-certificates/minikubeCA.pem
I0913 18:22:14.142550 14328 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0913 18:22:14.145249 14328 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0913 18:22:14.153446 14328 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0913 18:22:14.154522 14328 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0913 18:22:14.154558 14328 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0913 18:22:14.154670 14328 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0913 18:22:14.169618 14328 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0913 18:22:14.178435 14328 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0913 18:22:14.187095 14328 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0913 18:22:14.209512 14328 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0913 18:22:14.218144 14328 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0913 18:22:14.218164 14328 kubeadm.go:157] found existing configuration files:
I0913 18:22:14.218205 14328 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0913 18:22:14.226329 14328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0913 18:22:14.226384 14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0913 18:22:14.233697 14328 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0913 18:22:14.242804 14328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0913 18:22:14.242861 14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0913 18:22:14.250768 14328 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0913 18:22:14.260192 14328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0913 18:22:14.260253 14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0913 18:22:14.269276 14328 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0913 18:22:14.278504 14328 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0913 18:22:14.278559 14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0913 18:22:14.286865 14328 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0913 18:22:14.320061 14328 kubeadm.go:310] W0913 18:22:14.319926 15232 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0913 18:22:14.320526 14328 kubeadm.go:310] W0913 18:22:14.320475 15232 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0913 18:22:14.322067 14328 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0913 18:22:14.322103 14328 kubeadm.go:310] [preflight] Running pre-flight checks
I0913 18:22:14.413966 14328 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0913 18:22:14.414091 14328 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0913 18:22:14.414108 14328 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0913 18:22:14.414116 14328 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0913 18:22:14.424391 14328 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0913 18:22:14.427855 14328 out.go:235] - Generating certificates and keys ...
I0913 18:22:14.427902 14328 kubeadm.go:310] [certs] Using existing ca certificate authority
I0913 18:22:14.427918 14328 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0913 18:22:14.624930 14328 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0913 18:22:15.040758 14328 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0913 18:22:15.162408 14328 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0913 18:22:15.658152 14328 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0913 18:22:16.025339 14328 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0913 18:22:16.025416 14328 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I0913 18:22:16.120885 14328 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0913 18:22:16.120989 14328 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I0913 18:22:16.372802 14328 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0913 18:22:16.510885 14328 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0913 18:22:16.945085 14328 kubeadm.go:310] [certs] Generating "sa" key and public key
I0913 18:22:16.945228 14328 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0913 18:22:17.084123 14328 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0913 18:22:17.300113 14328 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0913 18:22:17.553031 14328 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0913 18:22:17.682710 14328 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0913 18:22:17.780326 14328 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0913 18:22:17.780872 14328 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0913 18:22:17.783119 14328 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0913 18:22:17.785707 14328 out.go:235] - Booting up control plane ...
I0913 18:22:17.785740 14328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0913 18:22:17.785762 14328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0913 18:22:17.786072 14328 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0913 18:22:17.808012 14328 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0913 18:22:17.812401 14328 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0913 18:22:17.812425 14328 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0913 18:22:18.061184 14328 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0913 18:22:18.061207 14328 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0913 18:22:19.062988 14328 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001791222s
I0913 18:22:19.063011 14328 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0913 18:22:23.564817 14328 kubeadm.go:310] [api-check] The API server is healthy after 4.501698381s
I0913 18:22:23.577028 14328 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0913 18:22:23.587239 14328 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0913 18:22:23.607756 14328 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0913 18:22:23.607779 14328 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0913 18:22:23.617294 14328 kubeadm.go:310] [bootstrap-token] Using token: sldh4y.yuhm3u7inwrmozvf
I0913 18:22:23.618670 14328 out.go:235] - Configuring RBAC rules ...
I0913 18:22:23.618699 14328 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0913 18:22:23.622263 14328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0913 18:22:23.627835 14328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0913 18:22:23.631523 14328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0913 18:22:23.634289 14328 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0913 18:22:23.636831 14328 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0913 18:22:23.971592 14328 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0913 18:22:24.404971 14328 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0913 18:22:24.972154 14328 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0913 18:22:24.973197 14328 kubeadm.go:310]
I0913 18:22:24.973216 14328 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0913 18:22:24.973221 14328 kubeadm.go:310]
I0913 18:22:24.973226 14328 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0913 18:22:24.973237 14328 kubeadm.go:310]
I0913 18:22:24.973241 14328 kubeadm.go:310] mkdir -p $HOME/.kube
I0913 18:22:24.973245 14328 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0913 18:22:24.973258 14328 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0913 18:22:24.973262 14328 kubeadm.go:310]
I0913 18:22:24.973266 14328 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0913 18:22:24.973271 14328 kubeadm.go:310]
I0913 18:22:24.973276 14328 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0913 18:22:24.973284 14328 kubeadm.go:310]
I0913 18:22:24.973288 14328 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0913 18:22:24.973295 14328 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0913 18:22:24.973300 14328 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0913 18:22:24.973307 14328 kubeadm.go:310]
I0913 18:22:24.973313 14328 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0913 18:22:24.973319 14328 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0913 18:22:24.973328 14328 kubeadm.go:310]
I0913 18:22:24.973335 14328 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sldh4y.yuhm3u7inwrmozvf \
I0913 18:22:24.973341 14328 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:961bd654f095ef1a6d147d71a918dd0b71b1322a66f9fb78ac53da26dd6c0c4c \
I0913 18:22:24.973345 14328 kubeadm.go:310] --control-plane
I0913 18:22:24.973350 14328 kubeadm.go:310]
I0913 18:22:24.973357 14328 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0913 18:22:24.973361 14328 kubeadm.go:310]
I0913 18:22:24.973368 14328 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sldh4y.yuhm3u7inwrmozvf \
I0913 18:22:24.973373 14328 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:961bd654f095ef1a6d147d71a918dd0b71b1322a66f9fb78ac53da26dd6c0c4c
I0913 18:22:24.976295 14328 cni.go:84] Creating CNI manager for ""
I0913 18:22:24.976324 14328 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0913 18:22:24.979363 14328 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0913 18:22:24.980696 14328 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0913 18:22:24.991712 14328 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0913 18:22:24.991877 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3692986788 /etc/cni/net.d/1-k8s.conflist
I0913 18:22:25.004762 14328 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0913 18:22:25.004824 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:25.004885 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_09_13T18_22_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0913 18:22:25.013361 14328 ops.go:34] apiserver oom_adj: -16
I0913 18:22:25.082888 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:25.583265 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:26.083471 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:26.583482 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:27.082955 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:27.582949 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:28.083147 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:28.583108 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:29.083583 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:29.583491 14328 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 18:22:29.673230 14328 kubeadm.go:1113] duration metric: took 4.668457432s to wait for elevateKubeSystemPrivileges
I0913 18:22:29.673261 14328 kubeadm.go:394] duration metric: took 15.518707703s to StartCluster
I0913 18:22:29.673282 14328 settings.go:142] acquiring lock: {Name:mk98196c8c447c4d1ddda32c1e2d671af91b86c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:29.673336 14328 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19636-3707/kubeconfig
I0913 18:22:29.673909 14328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19636-3707/kubeconfig: {Name:mk8dbe36e5fbf6af14c0274573a74465da65b6cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 18:22:29.674107 14328 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0913 18:22:29.674189 14328 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0913 18:22:29.674323 14328 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0913 18:22:29.674337 14328 addons.go:69] Setting yakd=true in profile "minikube"
I0913 18:22:29.674351 14328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0913 18:22:29.674356 14328 addons.go:234] Setting addon yakd=true in "minikube"
I0913 18:22:29.674358 14328 addons.go:69] Setting registry=true in profile "minikube"
I0913 18:22:29.674372 14328 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0913 18:22:29.674389 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.674385 14328 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:22:29.674398 14328 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0913 18:22:29.674407 14328 addons.go:69] Setting metrics-server=true in profile "minikube"
I0913 18:22:29.674417 14328 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0913 18:22:29.674427 14328 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0913 18:22:29.674429 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.674431 14328 addons.go:69] Setting volcano=true in profile "minikube"
I0913 18:22:29.674439 14328 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0913 18:22:29.674440 14328 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0913 18:22:29.674446 14328 addons.go:234] Setting addon volcano=true in "minikube"
I0913 18:22:29.674454 14328 mustload.go:65] Loading cluster: minikube
I0913 18:22:29.674469 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.674472 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.674631 14328 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 18:22:29.674663 14328 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0913 18:22:29.674696 14328 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0913 18:22:29.674721 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.674968 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.674983 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.675016 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.675090 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.675094 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.675094 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.675109 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.675111 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.675110 14328 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0913 18:22:29.675118 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.675123 14328 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0913 18:22:29.675130 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.675142 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.674419 14328 addons.go:234] Setting addon metrics-server=true in "minikube"
I0913 18:22:29.675145 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.675158 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.675165 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.675290 14328 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0913 18:22:29.675323 14328 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0913 18:22:29.675346 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.675660 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.675675 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.675707 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.675760 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.675771 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.675800 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.675800 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.675813 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.675846 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.674398 14328 addons.go:234] Setting addon registry=true in "minikube"
I0913 18:22:29.675946 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.675962 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.675976 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.675994 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.674407 14328 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0913 18:22:29.676143 14328 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0913 18:22:29.675142 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.676313 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.676327 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.676357 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.674430 14328 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0913 18:22:29.676457 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.676744 14328 out.go:177] * Configuring local host environment ...
I0913 18:22:29.677068 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.677089 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.677117 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0913 18:22:29.678452 14328 out.go:270] *
W0913 18:22:29.678479 14328 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0913 18:22:29.678493 14328 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0913 18:22:29.678501 14328 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0913 18:22:29.678514 14328 out.go:270] *
W0913 18:22:29.678567 14328 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0913 18:22:29.678581 14328 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0913 18:22:29.678593 14328 out.go:270] *
W0913 18:22:29.678619 14328 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0913 18:22:29.678630 14328 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0913 18:22:29.678641 14328 out.go:270] *
W0913 18:22:29.678657 14328 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
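The relocation the warning block above suggests (move root-owned `.kube`/`.minikube` into the invoking user's home, then chown) can be sketched safely against a throwaway directory; the temp paths below are stand-ins for the real `$HOME`, and plain `mv` stands in for the `sudo` steps:

```shell
# Sketch of the none-driver config relocation, exercised in a temp dir
# instead of the real $HOME so it needs no root and touches nothing live.
tmp=$(mktemp -d)
mkdir -p "$tmp/root-home/.kube" "$tmp/root-home/.minikube" "$tmp/user-home"
# Real flow: sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
mv "$tmp/root-home/.kube" "$tmp/root-home/.minikube" "$tmp/user-home/"
# Real flow would follow with: sudo chown -R $USER $HOME/.kube $HOME/.minikube
result="missing"
[ -d "$tmp/user-home/.kube" ] && [ -d "$tmp/user-home/.minikube" ] && result="relocated"
echo "$result"
rm -rf "$tmp"
```

Setting `CHANGE_MINIKUBE_NONE_USER=true` (as the log notes) makes minikube perform this move and chown automatically.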
I0913 18:22:29.678692 14328 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0913 18:22:29.675101 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.679264 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.680160 14328 out.go:177] * Verifying Kubernetes components...
I0913 18:22:29.682184 14328 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 18:22:29.699527 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.699591 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.699614 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.699651 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.699909 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.699940 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.699972 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.702358 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.718121 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.718320 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.720220 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.720272 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.720330 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.721701 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.721829 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.721894 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.723328 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.738305 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.740079 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.740105 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.740264 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.740310 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.740549 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.741939 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.744151 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.744173 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.745721 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.746625 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
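The repeated sequence above (`egrep ^[0-9]+:freezer: /proc/<pid>/cgroup`, then `cat .../freezer.state`, then the `/healthz` probe) is minikube confirming the apiserver's cgroup is not frozen before trusting its health endpoint. A sketch of the parsing step, with a sample cgroup line and a hardcoded state standing in for the procfs/cgroupfs reads:

```shell
# Sketch of the cgroup v1 freezer check logged above. Sample data replaces:
#   sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup
#   sudo cat /sys/fs/cgroup/freezer<path>/freezer.state
cgroup_line="12:freezer:/kubepods/burstable/podXXXX/67a5b3e35d53"
freezer_path="${cgroup_line#*:freezer:}"   # strip "12:freezer:" -> cgroup path
state="THAWED"                              # stand-in for the freezer.state read
if [ "$state" = "THAWED" ]; then
  echo "freezer state: \"$state\" for $freezer_path -- safe to probe /healthz"
fi
```

Only after seeing `THAWED` does the log proceed to `Checking apiserver healthz at https://10.154.0.4:8443/healthz`.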
I0913 18:22:29.749445 14328 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0913 18:22:29.749491 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.750172 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.750189 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.750224 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.751303 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.752227 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.752287 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.752726 14328 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0913 18:22:29.753915 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.753937 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.755074 14328 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0913 18:22:29.755127 14328 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0913 18:22:29.755313 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2808512143 /etc/kubernetes/addons/ig-namespace.yaml
I0913 18:22:29.758167 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.758224 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.758953 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.759718 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.759773 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.760908 14328 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0913 18:22:29.764795 14328 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0913 18:22:29.767002 14328 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0913 18:22:29.770013 14328 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0913 18:22:29.770054 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0913 18:22:29.770643 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2613983426 /etc/kubernetes/addons/volcano-deployment.yaml
I0913 18:22:29.771865 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.771928 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.777015 14328 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0913 18:22:29.777041 14328 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0913 18:22:29.777135 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube113624407 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0913 18:22:29.777001 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.777304 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.783109 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.783058 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.783205 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.783433 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.783456 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.784118 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.784136 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.784661 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.793438 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.796158 14328 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0913 18:22:29.799450 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.799478 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.799632 14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0913 18:22:29.799680 14328 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0913 18:22:29.799842 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube44809800 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0913 18:22:29.800806 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.811311 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.811376 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.811562 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.811603 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.811907 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.811932 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.812234 14328 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0913 18:22:29.812256 14328 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0913 18:22:29.812383 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1971748866 /etc/kubernetes/addons/ig-role.yaml
I0913 18:22:29.818810 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.818835 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.819179 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.819199 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.820116 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.820164 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.820657 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.820677 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.821351 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.821396 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.823174 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.823624 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.823641 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.825119 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.826139 14328 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0913 18:22:29.826376 14328 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0913 18:22:29.826428 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:29.827250 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:29.827273 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:29.827384 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:29.827824 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.829170 14328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0913 18:22:29.829203 14328 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0913 18:22:29.829332 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2520373827 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0913 18:22:29.829402 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.829968 14328 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0913 18:22:29.830008 14328 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0913 18:22:29.830750 14328 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0913 18:22:29.831068 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.832230 14328 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0913 18:22:29.832261 14328 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0913 18:22:29.832398 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2880040134 /etc/kubernetes/addons/metrics-apiservice.yaml
I0913 18:22:29.832914 14328 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0913 18:22:29.833025 14328 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0913 18:22:29.837878 14328 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0913 18:22:29.837907 14328 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0913 18:22:29.837940 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0913 18:22:29.838098 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4059801455 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0913 18:22:29.838102 14328 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0913 18:22:29.838140 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0913 18:22:29.838163 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.838182 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.838597 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1727490139 /etc/kubernetes/addons/deployment.yaml
I0913 18:22:29.839854 14328 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0913 18:22:29.839889 14328 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0913 18:22:29.840006 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2728120261 /etc/kubernetes/addons/ig-rolebinding.yaml
I0913 18:22:29.843555 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.845319 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.845346 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.848348 14328 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0913 18:22:29.848342 14328 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0913 18:22:29.848824 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0913 18:22:29.850237 14328 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0913 18:22:29.850383 14328 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0913 18:22:29.850547 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3118347710 /etc/kubernetes/addons/yakd-ns.yaml
I0913 18:22:29.852079 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.852103 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.852444 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.852555 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.852676 14328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0913 18:22:29.852698 14328 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0913 18:22:29.852834 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1090503355 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0913 18:22:29.853134 14328 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0913 18:22:29.854241 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0913 18:22:29.854798 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0913 18:22:29.856295 14328 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0913 18:22:29.856366 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.856431 14328 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0913 18:22:29.856449 14328 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0913 18:22:29.856455 14328 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0913 18:22:29.856493 14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0913 18:22:29.856774 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.858647 14328 out.go:177] - Using image docker.io/registry:2.8.3
I0913 18:22:29.858736 14328 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0913 18:22:29.860307 14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0913 18:22:29.860336 14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0913 18:22:29.860488 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube109296710 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0913 18:22:29.862262 14328 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0913 18:22:29.863986 14328 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0913 18:22:29.864013 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0913 18:22:29.864135 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1998936989 /etc/kubernetes/addons/registry-rc.yaml
I0913 18:22:29.868672 14328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0913 18:22:29.868695 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0913 18:22:29.868795 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1854299642 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0913 18:22:29.871825 14328 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0913 18:22:29.871860 14328 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0913 18:22:29.872002 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3505779699 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0913 18:22:29.873335 14328 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0913 18:22:29.873350 14328 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0913 18:22:29.873427 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2794437994 /etc/kubernetes/addons/ig-clusterrole.yaml
I0913 18:22:29.874817 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.874833 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.875546 14328 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0913 18:22:29.875698 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3603856914 /etc/kubernetes/addons/storageclass.yaml
I0913 18:22:29.876623 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:29.881186 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.881601 14328 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0913 18:22:29.881633 14328 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0913 18:22:29.881787 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3111377881 /etc/kubernetes/addons/yakd-sa.yaml
I0913 18:22:29.883741 14328 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0913 18:22:29.885458 14328 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0913 18:22:29.885485 14328 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0913 18:22:29.885495 14328 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0913 18:22:29.885539 14328 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0913 18:22:29.891231 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:29.891298 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:29.891702 14328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0913 18:22:29.891725 14328 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0913 18:22:29.891826 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1657859118 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0913 18:22:29.893902 14328 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0913 18:22:29.893929 14328 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0913 18:22:29.894050 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3971364297 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0913 18:22:29.898824 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0913 18:22:29.899207 14328 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0913 18:22:29.899238 14328 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0913 18:22:29.899605 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3656227679 /etc/kubernetes/addons/registry-svc.yaml
I0913 18:22:29.908396 14328 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0913 18:22:29.908423 14328 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0913 18:22:29.908536 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2518243723 /etc/kubernetes/addons/metrics-server-service.yaml
I0913 18:22:29.919140 14328 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0913 18:22:29.919183 14328 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0913 18:22:29.919326 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2784403250 /etc/kubernetes/addons/ig-crd.yaml
I0913 18:22:29.924818 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0913 18:22:29.924983 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1732008546 /etc/kubernetes/addons/storage-provisioner.yaml
I0913 18:22:29.929503 14328 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0913 18:22:29.929542 14328 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0913 18:22:29.929691 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2661398203 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0913 18:22:29.936251 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0913 18:22:29.941570 14328 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0913 18:22:29.943109 14328 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0913 18:22:29.943139 14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0913 18:22:29.943276 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1692137981 /etc/kubernetes/addons/rbac-hostpath.yaml
I0913 18:22:29.952268 14328 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0913 18:22:29.952301 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0913 18:22:29.952396 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3269589089 /etc/kubernetes/addons/registry-proxy.yaml
I0913 18:22:29.954778 14328 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0913 18:22:29.954820 14328 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0913 18:22:29.954962 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1867590320 /etc/kubernetes/addons/yakd-crb.yaml
I0913 18:22:29.955551 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:29.955581 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:29.962617 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:29.968110 14328 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0913 18:22:29.968139 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0913 18:22:29.968281 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2969284621 /etc/kubernetes/addons/ig-daemonset.yaml
I0913 18:22:29.970354 14328 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0913 18:22:29.974802 14328 out.go:177] - Using image docker.io/busybox:stable
I0913 18:22:29.975865 14328 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0913 18:22:29.975895 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0913 18:22:29.976023 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube64902857 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0913 18:22:29.977091 14328 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0913 18:22:29.977119 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0913 18:22:29.977211 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2207215465 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0913 18:22:29.977313 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0913 18:22:29.979394 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0913 18:22:29.993996 14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0913 18:22:29.994035 14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0913 18:22:29.994163 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1945848953 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0913 18:22:30.037170 14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0913 18:22:30.037203 14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0913 18:22:30.037324 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3679622764 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0913 18:22:30.051190 14328 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0913 18:22:30.051225 14328 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0913 18:22:30.051349 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube473149968 /etc/kubernetes/addons/yakd-svc.yaml
I0913 18:22:30.061205 14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0913 18:22:30.061239 14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0913 18:22:30.061467 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2654012100 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0913 18:22:30.082521 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0913 18:22:30.088061 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0913 18:22:30.100641 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0913 18:22:30.114969 14328 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0913 18:22:30.115006 14328 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0913 18:22:30.115139 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1771831967 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0913 18:22:30.152246 14328 exec_runner.go:51] Run: sudo systemctl start kubelet
I0913 18:22:30.157259 14328 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0913 18:22:30.157293 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0913 18:22:30.157429 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3908865038 /etc/kubernetes/addons/yakd-dp.yaml
I0913 18:22:30.200183 14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0913 18:22:30.200220 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0913 18:22:30.200369 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1603298593 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0913 18:22:30.209308 14328 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
I0913 18:22:30.212211 14328 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
I0913 18:22:30.212238 14328 node_ready.go:38] duration metric: took 2.87252ms for node "ubuntu-20-agent-9" to be "Ready" ...
I0913 18:22:30.212250 14328 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0913 18:22:30.223084 14328 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace to be "Ready" ...
I0913 18:22:30.288261 14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0913 18:22:30.288320 14328 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0913 18:22:30.288471 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2900606559 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0913 18:22:30.307848 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0913 18:22:30.422099 14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0913 18:22:30.422145 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0913 18:22:30.425414 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3801295454 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0913 18:22:30.480008 14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0913 18:22:30.480045 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0913 18:22:30.480171 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube825372909 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0913 18:22:30.554465 14328 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0913 18:22:30.655787 14328 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0913 18:22:30.655838 14328 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0913 18:22:30.658113 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2424578108 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0913 18:22:30.722736 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0913 18:22:30.990963 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.054659565s)
I0913 18:22:30.991005 14328 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0913 18:22:31.064684 14328 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0913 18:22:31.212697 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233268842s)
I0913 18:22:31.274282 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.296884817s)
I0913 18:22:31.274317 14328 addons.go:475] Verifying addon registry=true in "minikube"
I0913 18:22:31.276597 14328 out.go:177] * Verifying registry addon...
I0913 18:22:31.282829 14328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0913 18:22:31.293202 14328 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0913 18:22:31.293223 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:31.332432 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.249847536s)
I0913 18:22:31.370055 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.28194539s)
I0913 18:22:31.439304 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.131348016s)
I0913 18:22:31.443702 14328 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0913 18:22:31.802504 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:31.860313 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.75955483s)
W0913 18:22:31.860352 14328 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0913 18:22:31.860376 14328 retry.go:31] will retry after 330.814391ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0913 18:22:32.191905 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0913 18:22:32.234247 14328 pod_ready.go:103] pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace has status "Ready":"False"
I0913 18:22:32.287583 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:32.786972 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:32.927699 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.078827355s)
I0913 18:22:33.288515 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:33.336483 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.613663253s)
I0913 18:22:33.336527 14328 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0913 18:22:33.338137 14328 out.go:177] * Verifying csi-hostpath-driver addon...
I0913 18:22:33.340404 14328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0913 18:22:33.358755 14328 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0913 18:22:33.358783 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:33.786169 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:33.888731 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:34.287866 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:34.388956 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:34.728388 14328 pod_ready.go:103] pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace has status "Ready":"False"
I0913 18:22:34.786725 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:34.889049 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:35.153174 14328 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.961219478s)
I0913 18:22:35.287092 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:35.345937 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:35.729584 14328 pod_ready.go:93] pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace has status "Ready":"True"
I0913 18:22:35.729610 14328 pod_ready.go:82] duration metric: took 5.506492573s for pod "coredns-7c65d6cfc9-dzc9p" in "kube-system" namespace to be "Ready" ...
I0913 18:22:35.729623 14328 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w786s" in "kube-system" namespace to be "Ready" ...
I0913 18:22:35.787650 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:35.889303 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:36.287011 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:36.417030 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:36.786616 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:36.809096 14328 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0913 18:22:36.809245 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube950786621 /var/lib/minikube/google_application_credentials.json
I0913 18:22:36.819682 14328 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0913 18:22:36.819796 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2349982424 /var/lib/minikube/google_cloud_project
I0913 18:22:36.831179 14328 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0913 18:22:36.831238 14328 host.go:66] Checking if "minikube" exists ...
I0913 18:22:36.831957 14328 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0913 18:22:36.831987 14328 api_server.go:166] Checking apiserver status ...
I0913 18:22:36.832034 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:36.849525 14328 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15663/cgroup
I0913 18:22:36.862299 14328 api_server.go:182] apiserver freezer: "12:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51"
I0913 18:22:36.862365 14328 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/67a5b3e35d538b2f08638c0a6ba4795a273ac61684645fcb4f6e800e71c66e51/freezer.state
I0913 18:22:36.871948 14328 api_server.go:204] freezer state: "THAWED"
I0913 18:22:36.871974 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:36.876389 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:36.876447 14328 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0913 18:22:36.879649 14328 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0913 18:22:36.881406 14328 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0913 18:22:36.883224 14328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0913 18:22:36.883279 14328 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0913 18:22:36.883448 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3119801814 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0913 18:22:36.888205 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:36.893035 14328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0913 18:22:36.893069 14328 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0913 18:22:36.893168 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411136400 /etc/kubernetes/addons/gcp-auth-service.yaml
I0913 18:22:36.905142 14328 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0913 18:22:36.905171 14328 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0913 18:22:36.905300 14328 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1474031396 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0913 18:22:36.915488 14328 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0913 18:22:37.252033 14328 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0913 18:22:37.253666 14328 out.go:177] * Verifying gcp-auth addon...
I0913 18:22:37.256155 14328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0913 18:22:37.259244 14328 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0913 18:22:37.286801 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:37.360859 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:37.732737 14328 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-w786s" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-w786s" not found
I0913 18:22:37.732763 14328 pod_ready.go:82] duration metric: took 2.003132575s for pod "coredns-7c65d6cfc9-w786s" in "kube-system" namespace to be "Ready" ...
E0913 18:22:37.732773 14328 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-w786s" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-w786s" not found
I0913 18:22:37.732780 14328 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0913 18:22:37.736847 14328 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0913 18:22:37.736865 14328 pod_ready.go:82] duration metric: took 4.07971ms for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0913 18:22:37.736874 14328 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0913 18:22:37.740513 14328 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0913 18:22:37.740530 14328 pod_ready.go:82] duration metric: took 3.650368ms for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0913 18:22:37.740541 14328 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0913 18:22:37.787033 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:37.844421 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:38.288087 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:38.360794 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:38.786242 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:38.845617 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:39.246503 14328 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0913 18:22:39.246531 14328 pod_ready.go:82] duration metric: took 1.505980997s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0913 18:22:39.246544 14328 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7h9jz" in "kube-system" namespace to be "Ready" ...
I0913 18:22:39.251115 14328 pod_ready.go:93] pod "kube-proxy-7h9jz" in "kube-system" namespace has status "Ready":"True"
I0913 18:22:39.251133 14328 pod_ready.go:82] duration metric: took 4.581785ms for pod "kube-proxy-7h9jz" in "kube-system" namespace to be "Ready" ...
I0913 18:22:39.251142 14328 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0913 18:22:39.255127 14328 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0913 18:22:39.255150 14328 pod_ready.go:82] duration metric: took 3.998545ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0913 18:22:39.255159 14328 pod_ready.go:39] duration metric: took 9.042897978s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0913 18:22:39.255181 14328 api_server.go:52] waiting for apiserver process to appear ...
I0913 18:22:39.255237 14328 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 18:22:39.276986 14328 api_server.go:72] duration metric: took 9.598254421s to wait for apiserver process to appear ...
I0913 18:22:39.277015 14328 api_server.go:88] waiting for apiserver healthz status ...
I0913 18:22:39.277037 14328 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0913 18:22:39.281385 14328 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0913 18:22:39.282315 14328 api_server.go:141] control plane version: v1.31.1
I0913 18:22:39.282345 14328 api_server.go:131] duration metric: took 5.322484ms to wait for apiserver health ...
I0913 18:22:39.282355 14328 system_pods.go:43] waiting for kube-system pods to appear ...
I0913 18:22:39.286895 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:39.339926 14328 system_pods.go:59] 16 kube-system pods found
I0913 18:22:39.339954 14328 system_pods.go:61] "coredns-7c65d6cfc9-dzc9p" [64712751-8105-4d5b-86b8-5bd2782e3bd9] Running
I0913 18:22:39.339965 14328 system_pods.go:61] "csi-hostpath-attacher-0" [4ec23112-7b72-4bfd-8ff6-973e3b964990] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0913 18:22:39.339973 14328 system_pods.go:61] "csi-hostpath-resizer-0" [57bae9b9-3a16-482a-b527-3e8596fe036a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0913 18:22:39.339984 14328 system_pods.go:61] "csi-hostpathplugin-7rh6q" [9d9397e0-4a8e-4f8e-82f3-6db78c8f1dc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0913 18:22:39.339990 14328 system_pods.go:61] "etcd-ubuntu-20-agent-9" [3ccd7eaf-cf0f-432d-9934-c13ed80108c6] Running
I0913 18:22:39.339998 14328 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [939565f6-c84a-48c8-93b6-e34c15288f83] Running
I0913 18:22:39.340008 14328 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [cbe48c36-d55f-4a82-b1aa-3fd1ce21253c] Running
I0913 18:22:39.340016 14328 system_pods.go:61] "kube-proxy-7h9jz" [52317402-eb63-48c9-8336-46a0844b829a] Running
I0913 18:22:39.340022 14328 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [b80efbfb-025f-46ca-9dd2-05a697e7f31b] Running
I0913 18:22:39.340033 14328 system_pods.go:61] "metrics-server-84c5f94fbc-lkmcp" [5492915c-f03f-42c5-aae6-2a86f778d2cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0913 18:22:39.340046 14328 system_pods.go:61] "nvidia-device-plugin-daemonset-4lxnd" [4a7fb3ca-f619-4ff0-9c91-dff0f066b225] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I0913 18:22:39.340058 14328 system_pods.go:61] "registry-66c9cd494c-qnsqn" [9d207cfe-fc0d-47fe-ae8e-3720eb38b045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0913 18:22:39.340069 14328 system_pods.go:61] "registry-proxy-j9v4g" [01e1c35f-1c90-440d-92e7-defa8bfc5517] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0913 18:22:39.340077 14328 system_pods.go:61] "snapshot-controller-56fcc65765-m7zlt" [225ec73c-c647-4222-980d-449e0b3cdd5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0913 18:22:39.340086 14328 system_pods.go:61] "snapshot-controller-56fcc65765-s9gr4" [d9c3fead-79ec-46ad-988c-a79adb6ce2fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0913 18:22:39.340091 14328 system_pods.go:61] "storage-provisioner" [2d994015-e9ee-437f-9a93-f03abeb1e209] Running
I0913 18:22:39.340099 14328 system_pods.go:74] duration metric: took 57.737104ms to wait for pod list to return data ...
I0913 18:22:39.340106 14328 default_sa.go:34] waiting for default service account to be created ...
I0913 18:22:39.345265 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:39.534610 14328 default_sa.go:45] found service account: "default"
I0913 18:22:39.534638 14328 default_sa.go:55] duration metric: took 194.525056ms for default service account to be created ...
I0913 18:22:39.534650 14328 system_pods.go:116] waiting for k8s-apps to be running ...
I0913 18:22:39.739710 14328 system_pods.go:86] 16 kube-system pods found
I0913 18:22:39.739738 14328 system_pods.go:89] "coredns-7c65d6cfc9-dzc9p" [64712751-8105-4d5b-86b8-5bd2782e3bd9] Running
I0913 18:22:39.739749 14328 system_pods.go:89] "csi-hostpath-attacher-0" [4ec23112-7b72-4bfd-8ff6-973e3b964990] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0913 18:22:39.739757 14328 system_pods.go:89] "csi-hostpath-resizer-0" [57bae9b9-3a16-482a-b527-3e8596fe036a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0913 18:22:39.739767 14328 system_pods.go:89] "csi-hostpathplugin-7rh6q" [9d9397e0-4a8e-4f8e-82f3-6db78c8f1dc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0913 18:22:39.739773 14328 system_pods.go:89] "etcd-ubuntu-20-agent-9" [3ccd7eaf-cf0f-432d-9934-c13ed80108c6] Running
I0913 18:22:39.739782 14328 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [939565f6-c84a-48c8-93b6-e34c15288f83] Running
I0913 18:22:39.739790 14328 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [cbe48c36-d55f-4a82-b1aa-3fd1ce21253c] Running
I0913 18:22:39.739800 14328 system_pods.go:89] "kube-proxy-7h9jz" [52317402-eb63-48c9-8336-46a0844b829a] Running
I0913 18:22:39.739806 14328 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [b80efbfb-025f-46ca-9dd2-05a697e7f31b] Running
I0913 18:22:39.739818 14328 system_pods.go:89] "metrics-server-84c5f94fbc-lkmcp" [5492915c-f03f-42c5-aae6-2a86f778d2cc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0913 18:22:39.739832 14328 system_pods.go:89] "nvidia-device-plugin-daemonset-4lxnd" [4a7fb3ca-f619-4ff0-9c91-dff0f066b225] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I0913 18:22:39.739845 14328 system_pods.go:89] "registry-66c9cd494c-qnsqn" [9d207cfe-fc0d-47fe-ae8e-3720eb38b045] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0913 18:22:39.739857 14328 system_pods.go:89] "registry-proxy-j9v4g" [01e1c35f-1c90-440d-92e7-defa8bfc5517] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0913 18:22:39.739868 14328 system_pods.go:89] "snapshot-controller-56fcc65765-m7zlt" [225ec73c-c647-4222-980d-449e0b3cdd5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0913 18:22:39.739879 14328 system_pods.go:89] "snapshot-controller-56fcc65765-s9gr4" [d9c3fead-79ec-46ad-988c-a79adb6ce2fd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0913 18:22:39.739891 14328 system_pods.go:89] "storage-provisioner" [2d994015-e9ee-437f-9a93-f03abeb1e209] Running
I0913 18:22:39.739904 14328 system_pods.go:126] duration metric: took 205.246456ms to wait for k8s-apps to be running ...
I0913 18:22:39.739917 14328 system_svc.go:44] waiting for kubelet service to be running ....
I0913 18:22:39.739971 14328 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0913 18:22:39.755898 14328 system_svc.go:56] duration metric: took 15.9653ms WaitForService to wait for kubelet
I0913 18:22:39.755929 14328 kubeadm.go:582] duration metric: took 10.077205426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0913 18:22:39.755955 14328 node_conditions.go:102] verifying NodePressure condition ...
I0913 18:22:39.787150 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:39.845041 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:39.934253 14328 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0913 18:22:39.934296 14328 node_conditions.go:123] node cpu capacity is 8
I0913 18:22:39.934311 14328 node_conditions.go:105] duration metric: took 178.349477ms to run NodePressure ...
I0913 18:22:39.934326 14328 start.go:241] waiting for startup goroutines ...
I0913 18:22:40.361267 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:40.362048 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:40.786611 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:40.845949 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:41.286465 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:41.345160 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:41.787153 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:41.845094 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:42.286843 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:42.346439 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:42.786491 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:42.845016 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:43.340659 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:43.344242 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:43.787276 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:43.844802 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:44.442408 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:44.443277 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:44.786463 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:44.844741 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:45.287179 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:45.344440 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:45.786647 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:45.845516 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:46.285693 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:46.361321 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:46.786414 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:46.845443 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:47.286644 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:47.345643 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:47.787165 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:47.844499 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:48.286749 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:48.474499 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:48.787047 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:48.845782 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:49.287056 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:49.344329 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:49.789433 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:49.844958 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:50.360433 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:50.361224 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:50.862766 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:50.863231 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:51.286323 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:51.344199 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:51.787210 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:51.844505 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:52.289171 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:52.390735 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:52.787427 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:52.845425 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:53.286807 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:53.345620 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:53.786585 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:53.845496 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:54.286213 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:54.345608 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:54.786487 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:54.862932 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:55.286671 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:55.345831 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:55.786977 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:55.844950 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:56.287174 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:56.344859 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:56.787602 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:56.845360 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:57.286744 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:57.345529 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:57.787303 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:57.844585 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:58.286887 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:58.345643 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:58.863456 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 18:22:58.864887 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:59.286870 14328 kapi.go:107] duration metric: took 28.004041253s to wait for kubernetes.io/minikube-addons=registry ...
I0913 18:22:59.346033 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:22:59.845320 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:00.344737 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:00.845320 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:01.345258 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:01.845115 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:02.345490 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:02.916539 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:03.345278 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:03.845118 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:04.344658 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:04.845961 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:05.361989 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:05.845142 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:06.361956 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:06.844399 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:07.345310 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:07.895620 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:08.361337 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:08.845567 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:09.344950 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:09.844524 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:10.345780 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:10.845128 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:11.345262 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:11.862386 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:12.345515 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:12.862232 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:13.345604 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:13.844968 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:14.345115 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:14.845814 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:15.345268 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:15.847085 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:16.345863 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:16.861925 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:17.345486 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:17.845293 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 18:23:18.352228 14328 kapi.go:107] duration metric: took 45.011818471s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0913 18:23:59.258969 14328 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0913 18:23:59.258991 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:23:59.762060 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:00.259893 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:00.760044 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:01.260622 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:01.759528 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:02.259771 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:02.760292 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:03.259541 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:03.760129 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:04.259059 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:04.759040 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:05.259825 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:05.761115 14328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 18:24:06.260138 14328 kapi.go:107] duration metric: took 1m29.003982108s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0913 18:24:06.261887 14328 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0913 18:24:06.263794 14328 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0913 18:24:06.265156 14328 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0913 18:24:06.266873 14328 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, metrics-server, storage-provisioner, inspektor-gadget, storage-provisioner-rancher, yakd, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0913 18:24:06.268221 14328 addons.go:510] duration metric: took 1m36.594013586s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass metrics-server storage-provisioner inspektor-gadget storage-provisioner-rancher yakd volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0913 18:24:06.268282 14328 start.go:246] waiting for cluster config update ...
I0913 18:24:06.268306 14328 start.go:255] writing updated cluster config ...
I0913 18:24:06.268578 14328 exec_runner.go:51] Run: rm -f paused
I0913 18:24:06.320579 14328 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
I0913 18:24:06.322453 14328 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Sat 2024-09-07 03:35:14 UTC, end at Fri 2024-09-13 18:33:58 UTC. --
Sep 13 18:26:09 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:26:09.585545451Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 18:26:09 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:26:09.588131381Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 18:26:19 ubuntu-20-agent-9 cri-dockerd[14890]: time="2024-09-13T18:26:19Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 13 18:26:21 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:26:21.133920438Z" level=info msg="ignoring event" container=ea98f429e305e5b3ee091caae3ae86c842312f99672a1f167e4cd5b82d730b80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 18:27:36 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:27:36.594202477Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 18:27:36 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:27:36.596713132Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 18:29:10 ubuntu-20-agent-9 cri-dockerd[14890]: time="2024-09-13T18:29:10Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 13 18:29:11 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:29:11.886166883Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 13 18:29:11 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:29:11.886169720Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 13 18:29:11 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:29:11.888799516Z" level=error msg="Error running exec b19d1c667dc12445d7f1d6899eade1b8ea639c4e19eb1689c84ae14d8feb39c1 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 13 18:29:12 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:29:12.106435645Z" level=info msg="ignoring event" container=3a0b79a845ad96abb297eff75dfd6f542f46e16e5a12ae24ae4f9a16acf6f9c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 18:30:24 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:30:24.587897113Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 18:30:24 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:30:24.590168854Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 18:32:57 ubuntu-20-agent-9 cri-dockerd[14890]: time="2024-09-13T18:32:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c8d542960c28a5f60ee7b14808a2d0c8b06725da91dde87392ea9a537cf700b1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 13 18:32:58 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:32:58.102042179Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 18:32:58 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:32:58.104381189Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 18:33:10 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:10.593239793Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 18:33:10 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:10.595781327Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 18:33:35 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:35.598152859Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 18:33:35 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:35.600798103Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 18:33:57 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:57.557143568Z" level=info msg="ignoring event" container=c8d542960c28a5f60ee7b14808a2d0c8b06725da91dde87392ea9a537cf700b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 18:33:57 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:57.820940911Z" level=info msg="ignoring event" container=86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 18:33:57 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:57.880249002Z" level=info msg="ignoring event" container=c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 18:33:57 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:57.968309282Z" level=info msg="ignoring event" container=15e5bd4de548cae7e5969cab004c1c759e551424d13fbb53ac267520285333da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 18:33:58 ubuntu-20-agent-9 dockerd[14561]: time="2024-09-13T18:33:58.051347560Z" level=info msg="ignoring event" container=0a9dab71ff066747b671cb9aef58137b7acdaa2ef232b2fe727a994871dbf8c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
3a0b79a845ad9 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 4 minutes ago Exited gadget 6 beea0859cec70 gadget-sm92k
68c150b9fb0f4 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 f3e7bc3d1392f gcp-auth-89d5ffd79-rk6x4
6a15a65f82854 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 b2e574bf39c31 csi-hostpathplugin-7rh6q
064e874d218aa registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 b2e574bf39c31 csi-hostpathplugin-7rh6q
d3a19cb71e4a8 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 b2e574bf39c31 csi-hostpathplugin-7rh6q
933456c5bcb6b registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 b2e574bf39c31 csi-hostpathplugin-7rh6q
a0d29b11b47be registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 b2e574bf39c31 csi-hostpathplugin-7rh6q
6f1346c0a6bf1 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 b145a385c8e1c csi-hostpath-resizer-0
b977c7e284fc5 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 b2e574bf39c31 csi-hostpathplugin-7rh6q
60e5ebb1e8117 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 7fa58fa79b2b3 csi-hostpath-attacher-0
639657c030424 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 d9f3a06f87a7b snapshot-controller-56fcc65765-s9gr4
1af54f0913b72 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 1217f2ce294ed snapshot-controller-56fcc65765-m7zlt
dcf9ed79c0306 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 11 minutes ago Running yakd 0 b71ed978dbd2d yakd-dashboard-67d98fc6b-89dgq
b6b0127df7bb1 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 11 minutes ago Running local-path-provisioner 0 ff6b138e17439 local-path-provisioner-86d989889c-fkb6p
02d59fdb6d6fe registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 11 minutes ago Running metrics-server 0 20deb3afade9e metrics-server-84c5f94fbc-lkmcp
41918d553d7c8 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 d8af440651ce8 nvidia-device-plugin-daemonset-4lxnd
66fed67c8b963 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 11 minutes ago Running cloud-spanner-emulator 0 622159c2649f8 cloud-spanner-emulator-769b77f747-h9wlf
129df199ba791 6e38f40d628db 11 minutes ago Running storage-provisioner 0 d8b66876bc7b7 storage-provisioner
b6041e483c82c c69fa2e9cbf5f 11 minutes ago Running coredns 0 024e4a894a71e coredns-7c65d6cfc9-dzc9p
7fb4d4de53e46 60c005f310ff3 11 minutes ago Running kube-proxy 0 acfa464adfd09 kube-proxy-7h9jz
90c524e627d39 9aa1fad941575 11 minutes ago Running kube-scheduler 0 410aa9552087f kube-scheduler-ubuntu-20-agent-9
87c623f59ed62 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 1efc08d0cd074 kube-controller-manager-ubuntu-20-agent-9
5ea25fa40b23f 2e96e5913fc06 11 minutes ago Running etcd 0 c595a4f1736c3 etcd-ubuntu-20-agent-9
67a5b3e35d538 6bab7719df100 11 minutes ago Running kube-apiserver 0 439fe09a35b7d kube-apiserver-ubuntu-20-agent-9
==> coredns [b6041e483c82] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:40349 - 16842 "HINFO IN 3748346690135090459.1710582388281534536. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01467965s
[INFO] 10.244.0.23:46320 - 21531 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00030581s
[INFO] 10.244.0.23:48340 - 49562 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000168313s
[INFO] 10.244.0.23:57948 - 13450 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108658s
[INFO] 10.244.0.23:56229 - 41882 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143935s
[INFO] 10.244.0.23:49747 - 25314 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009418s
[INFO] 10.244.0.23:44463 - 2018 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143566s
[INFO] 10.244.0.23:49505 - 44756 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.003156477s
[INFO] 10.244.0.23:46277 - 34891 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004631362s
[INFO] 10.244.0.23:35146 - 8262 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.002662088s
[INFO] 10.244.0.23:41023 - 51039 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003863363s
[INFO] 10.244.0.23:48130 - 8388 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002789013s
[INFO] 10.244.0.23:43062 - 18106 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004005128s
[INFO] 10.244.0.23:36714 - 22414 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001716971s
[INFO] 10.244.0.23:55604 - 22984 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.01011837s
==> describe nodes <==
Name: ubuntu-20-agent-9
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-9
kubernetes.io/os=linux
minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_13T18_22_25_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-9
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 13 Sep 2024 18:22:21 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-9
AcquireTime: <unset>
RenewTime: Fri, 13 Sep 2024 18:33:48 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 13 Sep 2024 18:30:02 +0000 Fri, 13 Sep 2024 18:22:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 13 Sep 2024 18:30:02 +0000 Fri, 13 Sep 2024 18:22:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 13 Sep 2024 18:30:02 +0000 Fri, 13 Sep 2024 18:22:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 13 Sep 2024 18:30:02 +0000 Fri, 13 Sep 2024 18:22:22 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.154.0.4
Hostname: ubuntu-20-agent-9
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859304Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859304Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 4894487b-7b30-e033-3a9d-c6f45b6c4cf8
Boot ID: 12284a47-6cbe-446a-902c-cc7eddd0eaeb
Kernel Version: 5.15.0-1068-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m13s
default cloud-spanner-emulator-769b77f747-h9wlf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-sm92k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-rk6x4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m59s
kube-system coredns-7c65d6cfc9-dzc9p 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-7rh6q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-9 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-9 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-9 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-7h9jz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-9 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-lkmcp 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-4lxnd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-m7zlt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-s9gr4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-fkb6p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-89dgq 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
==> dmesg <==
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ed b9 a0 be ce 08 06
[ +1.123945] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 22 ac 13 73 72 08 06
[ +0.023561] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e 32 49 1e 41 06 08 06
[Sep13 18:23] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 23 e5 97 ca ad 08 06
[ +2.270764] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a e5 95 ce 38 a0 08 06
[ +2.538963] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a e6 90 6f b1 b0 08 06
[ +6.797879] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff aa b4 29 d5 65 16 08 06
[ +0.088588] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 ef 9a de 80 6b 08 06
[ +0.252507] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 c0 3c a5 9a fb 08 06
[ +27.002758] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff de f6 8b a2 18 41 08 06
[ +0.045129] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 a3 be f3 c8 0a 08 06
[Sep13 18:24] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff c2 b2 7c 7f 0c 1c 08 06
[ +0.000477] IPv4: martian source 10.244.0.23 from 10.244.0.4, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 fd 13 7e 84 af 08 06
==> etcd [5ea25fa40b23] <==
{"level":"info","ts":"2024-09-13T18:22:20.898385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became candidate at term 2"}
{"level":"info","ts":"2024-09-13T18:22:20.898391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgVoteResp from 82d4d36e40f9b4a at term 2"}
{"level":"info","ts":"2024-09-13T18:22:20.898399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became leader at term 2"}
{"level":"info","ts":"2024-09-13T18:22:20.898407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
{"level":"info","ts":"2024-09-13T18:22:20.899441Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-13T18:22:20.899453Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-13T18:22:20.899446Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-13T18:22:20.899486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-13T18:22:20.899687Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-13T18:22:20.899719Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-13T18:22:20.900148Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-13T18:22:20.900230Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-13T18:22:20.900261Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-13T18:22:20.900529Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-13T18:22:20.900727Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-13T18:22:20.901405Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
{"level":"info","ts":"2024-09-13T18:22:20.901524Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2024-09-13T18:22:48.471685Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.977337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2024-09-13T18:22:48.471743Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.431617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
{"level":"info","ts":"2024-09-13T18:22:48.471780Z","caller":"traceutil/trace.go:171","msg":"trace[26277867] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:918; }","duration":"129.093931ms","start":"2024-09-13T18:22:48.342672Z","end":"2024-09-13T18:22:48.471766Z","steps":["trace[26277867] 'range keys from in-memory index tree' (duration: 128.877781ms)"],"step_count":1}
{"level":"info","ts":"2024-09-13T18:22:48.471790Z","caller":"traceutil/trace.go:171","msg":"trace[1115861767] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:918; }","duration":"137.485361ms","start":"2024-09-13T18:22:48.334292Z","end":"2024-09-13T18:22:48.471777Z","steps":["trace[1115861767] 'range keys from in-memory index tree' (duration: 137.239788ms)"],"step_count":1}
{"level":"info","ts":"2024-09-13T18:22:48.585565Z","caller":"traceutil/trace.go:171","msg":"trace[1307810227] transaction","detail":"{read_only:false; response_revision:919; number_of_response:1; }","duration":"110.432471ms","start":"2024-09-13T18:22:48.475112Z","end":"2024-09-13T18:22:48.585544Z","steps":["trace[1307810227] 'process raft request' (duration: 110.222141ms)"],"step_count":1}
{"level":"info","ts":"2024-09-13T18:32:20.916742Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1691}
{"level":"info","ts":"2024-09-13T18:32:20.941506Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1691,"took":"24.243402ms","hash":2666877437,"current-db-size-bytes":8028160,"current-db-size":"8.0 MB","current-db-size-in-use-bytes":4214784,"current-db-size-in-use":"4.2 MB"}
{"level":"info","ts":"2024-09-13T18:32:20.941577Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2666877437,"revision":1691,"compact-revision":-1}
==> gcp-auth [68c150b9fb0f] <==
2024/09/13 18:24:05 GCP Auth Webhook started!
2024/09/13 18:24:21 Ready to marshal response ...
2024/09/13 18:24:21 Ready to write response ...
2024/09/13 18:24:21 Ready to marshal response ...
2024/09/13 18:24:21 Ready to write response ...
2024/09/13 18:24:44 Ready to marshal response ...
2024/09/13 18:24:44 Ready to write response ...
2024/09/13 18:24:45 Ready to marshal response ...
2024/09/13 18:24:45 Ready to write response ...
2024/09/13 18:24:45 Ready to marshal response ...
2024/09/13 18:24:45 Ready to write response ...
2024/09/13 18:32:57 Ready to marshal response ...
2024/09/13 18:32:57 Ready to write response ...
==> kernel <==
18:33:58 up 16 min, 0 users, load average: 0.08, 0.29, 0.33
Linux ubuntu-20-agent-9 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [67a5b3e35d53] <==
W0913 18:23:18.404904 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.38.240:443: connect: connection refused
W0913 18:23:40.273496 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.165.166:443: connect: connection refused
E0913 18:23:40.273531 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.165.166:443: connect: connection refused" logger="UnhandledError"
W0913 18:23:40.300745 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.165.166:443: connect: connection refused
E0913 18:23:40.300795 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.165.166:443: connect: connection refused" logger="UnhandledError"
W0913 18:23:59.232086 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.165.166:443: connect: connection refused
E0913 18:23:59.232129 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.165.166:443: connect: connection refused" logger="UnhandledError"
I0913 18:24:21.595117 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0913 18:24:21.612571 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0913 18:24:35.027340 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0913 18:24:35.032155 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0913 18:24:35.134419 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0913 18:24:35.181287 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0913 18:24:35.190535 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0913 18:24:35.340194 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0913 18:24:35.341915 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0913 18:24:35.367314 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0913 18:24:35.408498 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0913 18:24:36.055218 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0913 18:24:36.243880 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0913 18:24:36.329146 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0913 18:24:36.366746 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0913 18:24:36.366768 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0913 18:24:36.409428 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0913 18:24:36.578861 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [87c623f59ed6] <==
W0913 18:32:47.967749 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:32:47.967791 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:32:50.870333 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:32:50.870378 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:32:51.940442 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:32:51.940488 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:05.315523 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:05.315570 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:05.609278 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:05.609320 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:08.433760 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:08.433803 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:09.786230 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:09.786304 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:21.678124 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:21.678171 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:25.843452 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:25.843494 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:37.224302 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:37.224344 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:41.045683 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:41.045724 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 18:33:42.562921 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 18:33:42.562965 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0913 18:33:57.784797 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="12.007µs"
==> kube-proxy [7fb4d4de53e4] <==
I0913 18:22:30.465267 1 server_linux.go:66] "Using iptables proxy"
I0913 18:22:30.667673 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
E0913 18:22:30.670494 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0913 18:22:30.745651 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0913 18:22:30.745708 1 server_linux.go:169] "Using iptables Proxier"
I0913 18:22:30.748909 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0913 18:22:30.749253 1 server.go:483] "Version info" version="v1.31.1"
I0913 18:22:30.749269 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0913 18:22:30.756009 1 config.go:199] "Starting service config controller"
I0913 18:22:30.756066 1 shared_informer.go:313] Waiting for caches to sync for service config
I0913 18:22:30.756149 1 config.go:105] "Starting endpoint slice config controller"
I0913 18:22:30.756157 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0913 18:22:30.756972 1 config.go:328] "Starting node config controller"
I0913 18:22:30.756984 1 shared_informer.go:313] Waiting for caches to sync for node config
I0913 18:22:30.857263 1 shared_informer.go:320] Caches are synced for node config
I0913 18:22:30.857302 1 shared_informer.go:320] Caches are synced for service config
I0913 18:22:30.857344 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [90c524e627d3] <==
W0913 18:22:21.784364 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0913 18:22:21.784399 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0913 18:22:21.784392 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0913 18:22:21.784422 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0913 18:22:21.784471 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0913 18:22:21.784507 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0913 18:22:21.784498 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
E0913 18:22:21.784525 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0913 18:22:22.613246 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0913 18:22:22.613285 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0913 18:22:22.664007 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0913 18:22:22.664051 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0913 18:22:22.715046 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0913 18:22:22.715086 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0913 18:22:22.718574 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0913 18:22:22.718615 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0913 18:22:22.733036 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0913 18:22:22.733083 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0913 18:22:22.793057 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0913 18:22:22.793107 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0913 18:22:22.980044 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0913 18:22:22.980093 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0913 18:22:22.999578 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0913 18:22:22.999619 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0913 18:22:25.782015 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Sat 2024-09-07 03:35:14 UTC, end at Fri 2024-09-13 18:33:58 UTC. --
Sep 13 18:33:51 ubuntu-20-agent-9 kubelet[15819]: E0913 18:33:51.459152 15819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="64d5301d-22c2-4431-8f82-d176079a0e29"
Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.773826 15819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14b30f2c-4501-4306-a9f9-67206c2861f5-gcp-creds\") pod \"14b30f2c-4501-4306-a9f9-67206c2861f5\" (UID: \"14b30f2c-4501-4306-a9f9-67206c2861f5\") "
Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.773910 15819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpp5v\" (UniqueName: \"kubernetes.io/projected/14b30f2c-4501-4306-a9f9-67206c2861f5-kube-api-access-kpp5v\") pod \"14b30f2c-4501-4306-a9f9-67206c2861f5\" (UID: \"14b30f2c-4501-4306-a9f9-67206c2861f5\") "
Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.773997 15819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14b30f2c-4501-4306-a9f9-67206c2861f5-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "14b30f2c-4501-4306-a9f9-67206c2861f5" (UID: "14b30f2c-4501-4306-a9f9-67206c2861f5"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.776685 15819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b30f2c-4501-4306-a9f9-67206c2861f5-kube-api-access-kpp5v" (OuterVolumeSpecName: "kube-api-access-kpp5v") pod "14b30f2c-4501-4306-a9f9-67206c2861f5" (UID: "14b30f2c-4501-4306-a9f9-67206c2861f5"). InnerVolumeSpecName "kube-api-access-kpp5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.874858 15819 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14b30f2c-4501-4306-a9f9-67206c2861f5-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 13 18:33:57 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:57.874891 15819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kpp5v\" (UniqueName: \"kubernetes.io/projected/14b30f2c-4501-4306-a9f9-67206c2861f5-kube-api-access-kpp5v\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: E0913 18:33:58.059735 15819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod01e1c35f-1c90-440d-92e7-defa8bfc5517/0a9dab71ff066747b671cb9aef58137b7acdaa2ef232b2fe727a994871dbf8c2\": RecentStats: unable to find data in memory cache]"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.076685 15819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqgvq\" (UniqueName: \"kubernetes.io/projected/9d207cfe-fc0d-47fe-ae8e-3720eb38b045-kube-api-access-lqgvq\") pod \"9d207cfe-fc0d-47fe-ae8e-3720eb38b045\" (UID: \"9d207cfe-fc0d-47fe-ae8e-3720eb38b045\") "
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.079144 15819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d207cfe-fc0d-47fe-ae8e-3720eb38b045-kube-api-access-lqgvq" (OuterVolumeSpecName: "kube-api-access-lqgvq") pod "9d207cfe-fc0d-47fe-ae8e-3720eb38b045" (UID: "9d207cfe-fc0d-47fe-ae8e-3720eb38b045"). InnerVolumeSpecName "kube-api-access-lqgvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.177391 15819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lqgvq\" (UniqueName: \"kubernetes.io/projected/9d207cfe-fc0d-47fe-ae8e-3720eb38b045-kube-api-access-lqgvq\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.277822 15819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tnb6\" (UniqueName: \"kubernetes.io/projected/01e1c35f-1c90-440d-92e7-defa8bfc5517-kube-api-access-7tnb6\") pod \"01e1c35f-1c90-440d-92e7-defa8bfc5517\" (UID: \"01e1c35f-1c90-440d-92e7-defa8bfc5517\") "
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.279871 15819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e1c35f-1c90-440d-92e7-defa8bfc5517-kube-api-access-7tnb6" (OuterVolumeSpecName: "kube-api-access-7tnb6") pod "01e1c35f-1c90-440d-92e7-defa8bfc5517" (UID: "01e1c35f-1c90-440d-92e7-defa8bfc5517"). InnerVolumeSpecName "kube-api-access-7tnb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.321526 15819 scope.go:117] "RemoveContainer" containerID="c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.341116 15819 scope.go:117] "RemoveContainer" containerID="c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: E0913 18:33:58.342642 15819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081" containerID="c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.342709 15819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"} err="failed to get container status \"c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081\": rpc error: code = Unknown desc = Error response from daemon: No such container: c9c84ec4f65cdcbebf4234a85f803c6cc79246ec4afff0f73b35633503692081"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.342755 15819 scope.go:117] "RemoveContainer" containerID="86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.367581 15819 scope.go:117] "RemoveContainer" containerID="86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: E0913 18:33:58.368579 15819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8" containerID="86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.368628 15819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"} err="failed to get container status \"86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8\": rpc error: code = Unknown desc = Error response from daemon: No such container: 86683995c5f6ad779d34f3ee4fa84dc3c0362707202bf40313bb27e26fb1a2b8"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.378194 15819 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7tnb6\" (UniqueName: \"kubernetes.io/projected/01e1c35f-1c90-440d-92e7-defa8bfc5517-kube-api-access-7tnb6\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.469680 15819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e1c35f-1c90-440d-92e7-defa8bfc5517" path="/var/lib/kubelet/pods/01e1c35f-1c90-440d-92e7-defa8bfc5517/volumes"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.470002 15819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b30f2c-4501-4306-a9f9-67206c2861f5" path="/var/lib/kubelet/pods/14b30f2c-4501-4306-a9f9-67206c2861f5/volumes"
Sep 13 18:33:58 ubuntu-20-agent-9 kubelet[15819]: I0913 18:33:58.470186 15819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d207cfe-fc0d-47fe-ae8e-3720eb38b045" path="/var/lib/kubelet/pods/9d207cfe-fc0d-47fe-ae8e-3720eb38b045/volumes"
==> storage-provisioner [129df199ba79] <==
I0913 18:22:32.243137 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0913 18:22:32.254080 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0913 18:22:32.254132 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0913 18:22:32.265872 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0913 18:22:32.266068 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_f1861157-ef62-4af4-8fbc-179a6d9017f4!
I0913 18:22:32.268098 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ebc40ac-d000-4dbb-a657-8fb345c1c3c9", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_f1861157-ef62-4af4-8fbc-179a6d9017f4 became leader
I0913 18:22:32.366685 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_f1861157-ef62-4af4-8fbc-179a6d9017f4!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: ubuntu-20-agent-9/10.154.0.4
Start Time: Fri, 13 Sep 2024 18:24:45 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.25
IPs:
IP: 10.244.0.25
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gvhpp (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-gvhpp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m14s default-scheduler Successfully assigned default/busybox to ubuntu-20-agent-9
Normal Pulling 7m50s (x4 over 9m14s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m50s (x4 over 9m14s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning Failed 7m50s (x4 over 9m14s) kubelet Error: ErrImagePull
Warning Failed 7m24s (x6 over 9m14s) kubelet Error: ImagePullBackOff
Normal BackOff 4m2s (x21 over 9m14s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.88s)