=== RUN TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.540706ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jh4zk" [1fd26fe1-569a-41d8-bd27-41ea6d31c232] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002739316s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-twks8" [b1bc2a37-dafc-48f7-94a2-b80e57e12b9a] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00387983s
addons_test.go:338: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080372302s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/23 23:50:22 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:36161 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:38 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:40 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 23 Sep 24 23:40 UTC | 23 Sep 24 23:41 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/23 23:38:50
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 23:38:50.270310 18432 out.go:345] Setting OutFile to fd 1 ...
I0923 23:38:50.270435 18432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:38:50.270445 18432 out.go:358] Setting ErrFile to fd 2...
I0923 23:38:50.270452 18432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:38:50.270607 18432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7453/.minikube/bin
I0923 23:38:50.271160 18432 out.go:352] Setting JSON to false
I0923 23:38:50.272047 18432 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1279,"bootTime":1727133451,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0923 23:38:50.272131 18432 start.go:139] virtualization: kvm guest
I0923 23:38:50.274166 18432 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I0923 23:38:50.275376 18432 notify.go:220] Checking for updates...
I0923 23:38:50.275382 18432 out.go:177] - MINIKUBE_LOCATION=19696
W0923 23:38:50.275349 18432 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-7453/.minikube/cache/preloaded-tarball: no such file or directory
I0923 23:38:50.278047 18432 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 23:38:50.279401 18432 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19696-7453/kubeconfig
I0923 23:38:50.280644 18432 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7453/.minikube
I0923 23:38:50.281888 18432 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0923 23:38:50.283113 18432 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0923 23:38:50.284474 18432 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 23:38:50.294637 18432 out.go:177] * Using the none driver based on user configuration
I0923 23:38:50.295837 18432 start.go:297] selected driver: none
I0923 23:38:50.295850 18432 start.go:901] validating driver "none" against <nil>
I0923 23:38:50.295861 18432 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 23:38:50.295904 18432 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0923 23:38:50.296265 18432 out.go:270] ! The 'none' driver does not respect the --memory flag
I0923 23:38:50.296793 18432 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0923 23:38:50.297039 18432 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 23:38:50.297066 18432 cni.go:84] Creating CNI manager for ""
I0923 23:38:50.297112 18432 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 23:38:50.297129 18432 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0923 23:38:50.297164 18432 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 23:38:50.298512 18432 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0923 23:38:50.299942 18432 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/config.json ...
I0923 23:38:50.299976 18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/config.json: {Name:mkfc6f5cf141c223524c7eb348a8ed535e6b41a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:50.300111 18432 start.go:360] acquireMachinesLock for minikube: {Name:mk6e7fa6ceaa90ef14fbf41d1e1dd11e8c8d9b57 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 23:38:50.300152 18432 start.go:364] duration metric: took 26.721µs to acquireMachinesLock for "minikube"
I0923 23:38:50.300171 18432 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 23:38:50.300238 18432 start.go:125] createHost starting for "" (driver="none")
I0923 23:38:50.301666 18432 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0923 23:38:50.302759 18432 exec_runner.go:51] Run: systemctl --version
I0923 23:38:50.305235 18432 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0923 23:38:50.305281 18432 client.go:168] LocalClient.Create starting
I0923 23:38:50.305331 18432 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7453/.minikube/certs/ca.pem
I0923 23:38:50.305372 18432 main.go:141] libmachine: Decoding PEM data...
I0923 23:38:50.305394 18432 main.go:141] libmachine: Parsing certificate...
I0923 23:38:50.305462 18432 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19696-7453/.minikube/certs/cert.pem
I0923 23:38:50.305496 18432 main.go:141] libmachine: Decoding PEM data...
I0923 23:38:50.305518 18432 main.go:141] libmachine: Parsing certificate...
I0923 23:38:50.305859 18432 client.go:171] duration metric: took 570.687µs to LocalClient.Create
I0923 23:38:50.305883 18432 start.go:167] duration metric: took 648.675µs to libmachine.API.Create "minikube"
I0923 23:38:50.305890 18432 start.go:293] postStartSetup for "minikube" (driver="none")
I0923 23:38:50.305932 18432 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0923 23:38:50.305976 18432 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0923 23:38:50.315767 18432 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0923 23:38:50.315802 18432 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0923 23:38:50.315815 18432 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0923 23:38:50.317962 18432 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0923 23:38:50.319205 18432 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7453/.minikube/addons for local assets ...
I0923 23:38:50.319245 18432 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7453/.minikube/files for local assets ...
I0923 23:38:50.319262 18432 start.go:296] duration metric: took 13.366723ms for postStartSetup
I0923 23:38:50.319826 18432 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/config.json ...
I0923 23:38:50.319957 18432 start.go:128] duration metric: took 19.710134ms to createHost
I0923 23:38:50.319970 18432 start.go:83] releasing machines lock for "minikube", held for 19.807645ms
I0923 23:38:50.320297 18432 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0923 23:38:50.320391 18432 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0923 23:38:50.322722 18432 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0923 23:38:50.323012 18432 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0923 23:38:50.331718 18432 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0923 23:38:50.331751 18432 start.go:495] detecting cgroup driver to use...
I0923 23:38:50.331791 18432 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 23:38:50.331898 18432 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 23:38:50.350698 18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0923 23:38:50.359770 18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0923 23:38:50.369309 18432 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0923 23:38:50.369355 18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0923 23:38:50.377783 18432 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 23:38:50.386277 18432 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0923 23:38:50.395920 18432 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0923 23:38:50.404729 18432 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0923 23:38:50.413504 18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0923 23:38:50.422318 18432 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0923 23:38:50.430466 18432 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0923 23:38:50.438360 18432 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0923 23:38:50.446030 18432 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0923 23:38:50.452656 18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 23:38:50.658282 18432 exec_runner.go:51] Run: sudo systemctl restart containerd
I0923 23:38:50.725669 18432 start.go:495] detecting cgroup driver to use...
I0923 23:38:50.725715 18432 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0923 23:38:50.725822 18432 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0923 23:38:50.743617 18432 exec_runner.go:51] Run: which cri-dockerd
I0923 23:38:50.744518 18432 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0923 23:38:50.752162 18432 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0923 23:38:50.752183 18432 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0923 23:38:50.752209 18432 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0923 23:38:50.758892 18432 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0923 23:38:50.759019 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube362157187 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0923 23:38:50.766279 18432 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0923 23:38:50.970256 18432 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0923 23:38:51.169441 18432 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0923 23:38:51.169616 18432 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0923 23:38:51.169631 18432 exec_runner.go:203] rm: /etc/docker/daemon.json
I0923 23:38:51.169677 18432 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0923 23:38:51.177306 18432 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0923 23:38:51.177455 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1428434658 /etc/docker/daemon.json
I0923 23:38:51.185851 18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 23:38:51.390709 18432 exec_runner.go:51] Run: sudo systemctl restart docker
I0923 23:38:51.685272 18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0923 23:38:51.695857 18432 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0923 23:38:51.711715 18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 23:38:51.721689 18432 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0923 23:38:51.925576 18432 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0923 23:38:52.122645 18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 23:38:52.317327 18432 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0923 23:38:52.330835 18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0923 23:38:52.341324 18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 23:38:52.543478 18432 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0923 23:38:52.609748 18432 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0923 23:38:52.609807 18432 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0923 23:38:52.611110 18432 start.go:563] Will wait 60s for crictl version
I0923 23:38:52.611142 18432 exec_runner.go:51] Run: which crictl
I0923 23:38:52.611931 18432 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0923 23:38:52.642081 18432 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0923 23:38:52.642135 18432 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0923 23:38:52.662180 18432 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0923 23:38:52.684113 18432 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0923 23:38:52.684188 18432 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0923 23:38:52.686821 18432 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0923 23:38:52.688021 18432 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0923 23:38:52.688115 18432 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 23:38:52.688125 18432 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
I0923 23:38:52.688210 18432 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0923 23:38:52.688247 18432 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0923 23:38:52.734183 18432 cni.go:84] Creating CNI manager for ""
I0923 23:38:52.734205 18432 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 23:38:52.734214 18432 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0923 23:38:52.734233 18432 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0923 23:38:52.734372 18432 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.138.0.48
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ubuntu-20-agent-2"
  kubeletExtraArgs:
    node-ip: 10.138.0.48
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0923 23:38:52.734433 18432 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0923 23:38:52.742593 18432 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0923 23:38:52.742639 18432 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0923 23:38:52.751014 18432 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0923 23:38:52.751017 18432 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0923 23:38:52.751057 18432 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0923 23:38:52.751065 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0923 23:38:52.751065 18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0923 23:38:52.751106 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0923 23:38:52.762918 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0923 23:38:52.798293 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3588237207 /var/lib/minikube/binaries/v1.31.1/kubectl
I0923 23:38:52.806913 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3526895639 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0923 23:38:52.840509 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1168637573 /var/lib/minikube/binaries/v1.31.1/kubelet
I0923 23:38:52.904582 18432 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0923 23:38:52.912454 18432 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0923 23:38:52.912473 18432 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0923 23:38:52.912507 18432 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0923 23:38:52.919643 18432 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
I0923 23:38:52.919794 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1995903575 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0923 23:38:52.928167 18432 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0923 23:38:52.928183 18432 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0923 23:38:52.928212 18432 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0923 23:38:52.935405 18432 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0923 23:38:52.935603 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3360056781 /lib/systemd/system/kubelet.service
I0923 23:38:52.943449 18432 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0923 23:38:52.943547 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2971247699 /var/tmp/minikube/kubeadm.yaml.new
I0923 23:38:52.951161 18432 exec_runner.go:51] Run: grep 10.138.0.48 control-plane.minikube.internal$ /etc/hosts
I0923 23:38:52.952477 18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 23:38:53.161085 18432 exec_runner.go:51] Run: sudo systemctl start kubelet
I0923 23:38:53.175699 18432 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube for IP: 10.138.0.48
I0923 23:38:53.175726 18432 certs.go:194] generating shared ca certs ...
I0923 23:38:53.175748 18432 certs.go:226] acquiring lock for ca certs: {Name:mk3948639b4bfbef52e479ad0192b298c7e79629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:53.176002 18432 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7453/.minikube/ca.key
I0923 23:38:53.176080 18432 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7453/.minikube/proxy-client-ca.key
I0923 23:38:53.176094 18432 certs.go:256] generating profile certs ...
I0923 23:38:53.176160 18432 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.key
I0923 23:38:53.176177 18432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.crt with IP's: []
I0923 23:38:53.282171 18432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.crt ...
I0923 23:38:53.282198 18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.crt: {Name:mk19d3f7393d5385c274c75a2b427d7742ec5ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:53.282323 18432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.key ...
I0923 23:38:53.282333 18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/client.key: {Name:mkd1d47c747a30b58f8f2d3871133d0fcc0a8eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:53.282400 18432 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key.35c0634a
I0923 23:38:53.282414 18432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
I0923 23:38:53.412602 18432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
I0923 23:38:53.412632 18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mkd235de71de07cef6bb7559bfcd80420fdebba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:53.412766 18432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key.35c0634a ...
I0923 23:38:53.412777 18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkd77e84c0de664396583aa1df4aabcb182fad66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:53.412828 18432 certs.go:381] copying /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt
I0923 23:38:53.412898 18432 certs.go:385] copying /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key
I0923 23:38:53.412947 18432 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.key
I0923 23:38:53.412960 18432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0923 23:38:53.480889 18432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.crt ...
I0923 23:38:53.480919 18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.crt: {Name:mkabfef810d894a1c07fb8f4032a43d22b3a3c1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:53.481037 18432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.key ...
I0923 23:38:53.481047 18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.key: {Name:mkddb05aa7d0f840b3d5b215353336e49719ffc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:38:53.481201 18432 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7453/.minikube/certs/ca-key.pem (1679 bytes)
I0923 23:38:53.481231 18432 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7453/.minikube/certs/ca.pem (1078 bytes)
I0923 23:38:53.481260 18432 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7453/.minikube/certs/cert.pem (1123 bytes)
I0923 23:38:53.481286 18432 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7453/.minikube/certs/key.pem (1679 bytes)
I0923 23:38:53.481898 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0923 23:38:53.482016 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1818874626 /var/lib/minikube/certs/ca.crt
I0923 23:38:53.490636 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0923 23:38:53.490751 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3177926111 /var/lib/minikube/certs/ca.key
I0923 23:38:53.498069 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0923 23:38:53.498175 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1135613230 /var/lib/minikube/certs/proxy-client-ca.crt
I0923 23:38:53.505441 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0923 23:38:53.505531 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1903075806 /var/lib/minikube/certs/proxy-client-ca.key
I0923 23:38:53.512617 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0923 23:38:53.512722 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube982993425 /var/lib/minikube/certs/apiserver.crt
I0923 23:38:53.519683 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0923 23:38:53.519829 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3881294579 /var/lib/minikube/certs/apiserver.key
I0923 23:38:53.527203 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0923 23:38:53.527312 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4243338006 /var/lib/minikube/certs/proxy-client.crt
I0923 23:38:53.535151 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0923 23:38:53.535248 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2141483979 /var/lib/minikube/certs/proxy-client.key
I0923 23:38:53.542239 18432 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0923 23:38:53.542253 18432 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:53.542278 18432 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:53.549237 18432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7453/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0923 23:38:53.549349 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2495434143 /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:53.556709 18432 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0923 23:38:53.556801 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube721483269 /var/lib/minikube/kubeconfig
I0923 23:38:53.564306 18432 exec_runner.go:51] Run: openssl version
I0923 23:38:53.567002 18432 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0923 23:38:53.574776 18432 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:53.576064 18432 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:53.576099 18432 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0923 23:38:53.578676 18432 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0923 23:38:53.586464 18432 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0923 23:38:53.587521 18432 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0923 23:38:53.587552 18432 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 23:38:53.587649 18432 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0923 23:38:53.602349 18432 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0923 23:38:53.611003 18432 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0923 23:38:53.618532 18432 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0923 23:38:53.639425 18432 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0923 23:38:53.646815 18432 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0923 23:38:53.646844 18432 kubeadm.go:157] found existing configuration files:
I0923 23:38:53.646881 18432 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0923 23:38:53.654004 18432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0923 23:38:53.654041 18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0923 23:38:53.661523 18432 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0923 23:38:53.668570 18432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0923 23:38:53.668611 18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0923 23:38:53.675461 18432 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0923 23:38:53.682863 18432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0923 23:38:53.682896 18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0923 23:38:53.689686 18432 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0923 23:38:53.697813 18432 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0923 23:38:53.697848 18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0923 23:38:53.704762 18432 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0923 23:38:53.735530 18432 kubeadm.go:310] W0923 23:38:53.735418 19311 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 23:38:53.736122 18432 kubeadm.go:310] W0923 23:38:53.736082 19311 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0923 23:38:53.737735 18432 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0923 23:38:53.737802 18432 kubeadm.go:310] [preflight] Running pre-flight checks
I0923 23:38:53.828653 18432 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0923 23:38:53.828731 18432 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0923 23:38:53.828739 18432 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0923 23:38:53.828744 18432 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0923 23:38:53.839942 18432 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0923 23:38:53.843310 18432 out.go:235] - Generating certificates and keys ...
I0923 23:38:53.843348 18432 kubeadm.go:310] [certs] Using existing ca certificate authority
I0923 23:38:53.843361 18432 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0923 23:38:53.945679 18432 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0923 23:38:54.013620 18432 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0923 23:38:54.171505 18432 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0923 23:38:54.397875 18432 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0923 23:38:54.536475 18432 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0923 23:38:54.536660 18432 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0923 23:38:54.651107 18432 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0923 23:38:54.651226 18432 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0923 23:38:54.826594 18432 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0923 23:38:54.958063 18432 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0923 23:38:55.185496 18432 kubeadm.go:310] [certs] Generating "sa" key and public key
I0923 23:38:55.185674 18432 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0923 23:38:55.290988 18432 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0923 23:38:55.491177 18432 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0923 23:38:55.580863 18432 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0923 23:38:55.768520 18432 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0923 23:38:55.900360 18432 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0923 23:38:55.900917 18432 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0923 23:38:55.903139 18432 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0923 23:38:55.905066 18432 out.go:235] - Booting up control plane ...
I0923 23:38:55.905097 18432 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0923 23:38:55.905118 18432 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0923 23:38:55.905467 18432 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0923 23:38:55.927117 18432 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0923 23:38:55.931087 18432 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0923 23:38:55.931107 18432 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0923 23:38:56.149006 18432 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0923 23:38:56.149045 18432 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0923 23:38:56.650524 18432 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.498698ms
I0923 23:38:56.650549 18432 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0923 23:39:01.151800 18432 kubeadm.go:310] [api-check] The API server is healthy after 4.50125584s
I0923 23:39:01.161767 18432 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0923 23:39:01.171166 18432 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0923 23:39:01.185870 18432 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0923 23:39:01.185897 18432 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0923 23:39:01.192984 18432 kubeadm.go:310] [bootstrap-token] Using token: 8apy58.p47gjyqdfoakrmhq
I0923 23:39:01.194627 18432 out.go:235] - Configuring RBAC rules ...
I0923 23:39:01.194654 18432 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0923 23:39:01.197083 18432 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0923 23:39:01.202508 18432 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0923 23:39:01.204670 18432 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0923 23:39:01.206819 18432 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0923 23:39:01.208888 18432 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0923 23:39:01.557088 18432 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0923 23:39:01.976738 18432 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0923 23:39:02.557773 18432 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0923 23:39:02.558566 18432 kubeadm.go:310]
I0923 23:39:02.558575 18432 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0923 23:39:02.558581 18432 kubeadm.go:310]
I0923 23:39:02.558586 18432 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0923 23:39:02.558589 18432 kubeadm.go:310]
I0923 23:39:02.558593 18432 kubeadm.go:310] mkdir -p $HOME/.kube
I0923 23:39:02.558597 18432 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0923 23:39:02.558601 18432 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0923 23:39:02.558605 18432 kubeadm.go:310]
I0923 23:39:02.558608 18432 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0923 23:39:02.558611 18432 kubeadm.go:310]
I0923 23:39:02.558615 18432 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0923 23:39:02.558618 18432 kubeadm.go:310]
I0923 23:39:02.558622 18432 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0923 23:39:02.558625 18432 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0923 23:39:02.558628 18432 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0923 23:39:02.558631 18432 kubeadm.go:310]
I0923 23:39:02.558641 18432 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0923 23:39:02.558645 18432 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0923 23:39:02.558650 18432 kubeadm.go:310]
I0923 23:39:02.558653 18432 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8apy58.p47gjyqdfoakrmhq \
I0923 23:39:02.558659 18432 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:db47f7bc1500c7cae7d7c11015704d36d474e91b604b6cfa650231ef586748b8 \
I0923 23:39:02.558663 18432 kubeadm.go:310] --control-plane
I0923 23:39:02.558668 18432 kubeadm.go:310]
I0923 23:39:02.558679 18432 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0923 23:39:02.558683 18432 kubeadm.go:310]
I0923 23:39:02.558687 18432 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8apy58.p47gjyqdfoakrmhq \
I0923 23:39:02.558691 18432 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:db47f7bc1500c7cae7d7c11015704d36d474e91b604b6cfa650231ef586748b8
I0923 23:39:02.561374 18432 cni.go:84] Creating CNI manager for ""
I0923 23:39:02.561398 18432 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 23:39:02.563032 18432 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0923 23:39:02.564265 18432 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0923 23:39:02.574525 18432 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0923 23:39:02.574656 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1688956124 /etc/cni/net.d/1-k8s.conflist
I0923 23:39:02.584150 18432 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0923 23:39:02.584196 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:02.584228 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_23T23_39_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0923 23:39:02.593188 18432 ops.go:34] apiserver oom_adj: -16
I0923 23:39:02.653032 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:03.153273 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:03.653372 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:04.153389 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:04.653242 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:05.154035 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:05.653420 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:06.154011 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:06.653107 18432 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0923 23:39:06.719087 18432 kubeadm.go:1113] duration metric: took 4.134931087s to wait for elevateKubeSystemPrivileges
I0923 23:39:06.719125 18432 kubeadm.go:394] duration metric: took 13.131573818s to StartCluster
I0923 23:39:06.719148 18432 settings.go:142] acquiring lock: {Name:mk8828190f1928b74029f5e970e6ecd99a25cc97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:39:06.719217 18432 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19696-7453/kubeconfig
I0923 23:39:06.719833 18432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7453/kubeconfig: {Name:mka6608b58d27d209fca19aaae65767ddd8ef430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 23:39:06.720048 18432 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0923 23:39:06.720097 18432 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0923 23:39:06.720224 18432 addons.go:69] Setting yakd=true in profile "minikube"
I0923 23:39:06.720239 18432 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0923 23:39:06.720248 18432 addons.go:234] Setting addon yakd=true in "minikube"
I0923 23:39:06.720253 18432 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0923 23:39:06.720253 18432 addons.go:69] Setting volcano=true in profile "minikube"
I0923 23:39:06.720250 18432 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0923 23:39:06.720273 18432 addons.go:234] Setting addon volcano=true in "minikube"
I0923 23:39:06.720289 18432 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0923 23:39:06.720304 18432 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0923 23:39:06.720310 18432 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0923 23:39:06.720318 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.720321 18432 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0923 23:39:06.720322 18432 addons.go:69] Setting registry=true in profile "minikube"
I0923 23:39:06.720312 18432 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0923 23:39:06.720335 18432 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0923 23:39:06.720337 18432 addons.go:234] Setting addon registry=true in "minikube"
I0923 23:39:06.720339 18432 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0923 23:39:06.720338 18432 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:39:06.720345 18432 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0923 23:39:06.720351 18432 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0923 23:39:06.720362 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.720371 18432 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0923 23:39:06.720376 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.720288 18432 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0923 23:39:06.720413 18432 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0923 23:39:06.720339 18432 mustload.go:65] Loading cluster: minikube
I0923 23:39:06.720437 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.720356 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.720672 18432 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:39:06.720280 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.721066 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.721066 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.720229 18432 addons.go:69] Setting metrics-server=true in profile "minikube"
I0923 23:39:06.720325 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.721467 18432 addons.go:234] Setting addon metrics-server=true in "minikube"
I0923 23:39:06.721519 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.721539 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.721559 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.721617 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.721907 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.721930 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.721962 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.722079 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.722102 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.722141 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.722323 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.722343 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.722374 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.720280 18432 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0923 23:39:06.723404 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.721080 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.720280 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.723861 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.723907 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.723958 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.724745 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.724778 18432 out.go:177] * Configuring local host environment ...
I0923 23:39:06.720327 18432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0923 23:39:06.725452 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.725462 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.725487 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.725618 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.725645 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.725649 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.725689 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.724791 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.721472 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.726266 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.726279 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.726281 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.726311 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0923 23:39:06.726342 18432 out.go:270] *
W0923 23:39:06.726361 18432 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0923 23:39:06.726375 18432 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0923 23:39:06.726385 18432 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0923 23:39:06.726391 18432 out.go:270] *
W0923 23:39:06.726429 18432 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0923 23:39:06.726447 18432 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0923 23:39:06.726455 18432 out.go:270] *
W0923 23:39:06.726484 18432 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0923 23:39:06.726724 18432 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0923 23:39:06.726733 18432 out.go:270] *
W0923 23:39:06.726741 18432 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0923 23:39:06.726779 18432 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0923 23:39:06.726340 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.727921 18432 out.go:177] * Verifying Kubernetes components...
I0923 23:39:06.729794 18432 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0923 23:39:06.740819 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.742516 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.742883 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.744466 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.760037 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.760169 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.760197 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.760482 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.760507 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.760540 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.761984 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.762007 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.762043 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.771939 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.772013 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.773637 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.773719 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.774318 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.774369 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.785246 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.785320 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.785343 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.785450 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.785470 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.785488 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.786334 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.786397 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.787073 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.787100 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.792679 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.793230 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.793259 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.794669 18432 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0923 23:39:06.794723 18432 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0923 23:39:06.794743 18432 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0923 23:39:06.796161 18432 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0923 23:39:06.796192 18432 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0923 23:39:06.798648 18432 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0923 23:39:06.799846 18432 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0923 23:39:06.801013 18432 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0923 23:39:06.801691 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.801738 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.802262 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.802323 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.802491 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.802528 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.803422 18432 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0923 23:39:06.804804 18432 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0923 23:39:06.804812 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.805962 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.807241 18432 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0923 23:39:06.807891 18432 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0923 23:39:06.807920 18432 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0923 23:39:06.808052 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2924540067 /etc/kubernetes/addons/ig-namespace.yaml
I0923 23:39:06.808268 18432 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0923 23:39:06.808282 18432 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0923 23:39:06.808300 18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0923 23:39:06.808320 18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0923 23:39:06.808324 18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0923 23:39:06.808476 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube414764368 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0923 23:39:06.809509 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.809550 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.809621 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.809664 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.809771 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.809811 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.814895 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.814943 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.820580 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.820603 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.823904 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.823956 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.825262 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.826154 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.826175 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.826481 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.826503 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.826935 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.826952 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.826970 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0923 23:39:06.827101 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube559590903 /etc/kubernetes/addons/storage-provisioner.yaml
I0923 23:39:06.827140 18432 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0923 23:39:06.828138 18432 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0923 23:39:06.828187 18432 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0923 23:39:06.828344 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671716264 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0923 23:39:06.828447 18432 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0923 23:39:06.828468 18432 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0923 23:39:06.828630 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3001632555 /etc/kubernetes/addons/metrics-apiservice.yaml
I0923 23:39:06.829740 18432 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0923 23:39:06.829766 18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0923 23:39:06.829902 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3417521862 /etc/kubernetes/addons/rbac-hostpath.yaml
I0923 23:39:06.830102 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.830117 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.831140 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.831185 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.833619 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.834900 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.835145 18432 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0923 23:39:06.835183 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.836502 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.837088 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.837983 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.844083 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.838780 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.844166 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.844217 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.842605 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0923 23:39:06.843573 18432 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0923 23:39:06.844575 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0923 23:39:06.843605 18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0923 23:39:06.844639 18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0923 23:39:06.843617 18432 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0923 23:39:06.845240 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3451982674 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0923 23:39:06.845265 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1270455008 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0923 23:39:06.845541 18432 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0923 23:39:06.846476 18432 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0923 23:39:06.846507 18432 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0923 23:39:06.847150 18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0923 23:39:06.847177 18432 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0923 23:39:06.847306 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1294110379 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0923 23:39:06.847740 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.847762 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.848077 18432 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0923 23:39:06.848103 18432 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0923 23:39:06.848208 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2507528624 /etc/kubernetes/addons/yakd-ns.yaml
I0923 23:39:06.848401 18432 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 23:39:06.848423 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0923 23:39:06.848728 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3231521951 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 23:39:06.848924 18432 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0923 23:39:06.848946 18432 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0923 23:39:06.849054 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2557109985 /etc/kubernetes/addons/ig-role.yaml
I0923 23:39:06.851711 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.851956 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.855694 18432 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0923 23:39:06.855696 18432 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0923 23:39:06.860776 18432 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0923 23:39:06.860810 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0923 23:39:06.860933 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1389798496 /etc/kubernetes/addons/deployment.yaml
I0923 23:39:06.862698 18432 out.go:177] - Using image docker.io/registry:2.8.3
I0923 23:39:06.862918 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.862939 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.868687 18432 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0923 23:39:06.868716 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0923 23:39:06.868858 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3302855751 /etc/kubernetes/addons/registry-rc.yaml
I0923 23:39:06.873084 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.873107 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.876751 18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0923 23:39:06.876777 18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0923 23:39:06.876911 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube801791274 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0923 23:39:06.878473 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.878526 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.881301 18432 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0923 23:39:06.881323 18432 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0923 23:39:06.881427 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1946111745 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0923 23:39:06.884379 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0923 23:39:06.886585 18432 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0923 23:39:06.886609 18432 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0923 23:39:06.886721 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3958992613 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0923 23:39:06.889033 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0923 23:39:06.892041 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.892063 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.892457 18432 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0923 23:39:06.892480 18432 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0923 23:39:06.892603 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3191309814 /etc/kubernetes/addons/yakd-sa.yaml
I0923 23:39:06.898188 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.898210 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.898531 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.901243 18432 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0923 23:39:06.901268 18432 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0923 23:39:06.901373 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3857214416 /etc/kubernetes/addons/ig-rolebinding.yaml
I0923 23:39:06.903112 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.903866 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.904026 18432 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0923 23:39:06.905040 18432 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0923 23:39:06.905072 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:06.905603 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:06.905621 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:06.905651 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:06.907984 18432 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0923 23:39:06.910887 18432 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0923 23:39:06.913355 18432 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0923 23:39:06.913392 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0923 23:39:06.913876 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1734807291 /etc/kubernetes/addons/volcano-deployment.yaml
I0923 23:39:06.918550 18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0923 23:39:06.918581 18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0923 23:39:06.918696 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube690570401 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0923 23:39:06.919332 18432 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0923 23:39:06.919359 18432 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0923 23:39:06.919472 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3858344043 /etc/kubernetes/addons/yakd-crb.yaml
I0923 23:39:06.924516 18432 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0923 23:39:06.924542 18432 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0923 23:39:06.924685 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube215688014 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0923 23:39:06.928641 18432 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0923 23:39:06.928675 18432 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0923 23:39:06.928789 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3260173872 /etc/kubernetes/addons/registry-svc.yaml
I0923 23:39:06.934067 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0923 23:39:06.934327 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:06.935575 18432 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0923 23:39:06.935609 18432 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0923 23:39:06.935707 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1037723576 /etc/kubernetes/addons/yakd-svc.yaml
I0923 23:39:06.938890 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.938964 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.940690 18432 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0923 23:39:06.940715 18432 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0923 23:39:06.940816 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube79155311 /etc/kubernetes/addons/ig-clusterrole.yaml
I0923 23:39:06.942190 18432 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0923 23:39:06.942230 18432 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0923 23:39:06.942345 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4114817471 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0923 23:39:06.945849 18432 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0923 23:39:06.945874 18432 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0923 23:39:06.945991 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2151043050 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0923 23:39:06.953050 18432 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0923 23:39:06.953078 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0923 23:39:06.953199 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3891720001 /etc/kubernetes/addons/registry-proxy.yaml
I0923 23:39:06.954790 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:06.954851 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:06.959232 18432 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0923 23:39:06.959259 18432 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0923 23:39:06.959382 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2050564571 /etc/kubernetes/addons/metrics-server-service.yaml
I0923 23:39:06.968039 18432 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0923 23:39:06.968064 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0923 23:39:06.968175 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1091340882 /etc/kubernetes/addons/yakd-dp.yaml
I0923 23:39:06.972151 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0923 23:39:06.977110 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:06.977136 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:06.980817 18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0923 23:39:06.980851 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0923 23:39:06.981658 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:06.981698 18432 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0923 23:39:06.981710 18432 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0923 23:39:06.981717 18432 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0923 23:39:06.981725 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2665118350 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0923 23:39:06.981752 18432 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0923 23:39:06.987388 18432 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0923 23:39:06.987416 18432 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0923 23:39:06.987529 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1138063822 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0923 23:39:06.991400 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0923 23:39:06.999292 18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0923 23:39:06.999322 18432 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0923 23:39:06.999456 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1860913612 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0923 23:39:07.001249 18432 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0923 23:39:07.001381 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4223334441 /etc/kubernetes/addons/storageclass.yaml
I0923 23:39:07.001437 18432 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0923 23:39:07.001463 18432 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0923 23:39:07.001945 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2361636232 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0923 23:39:07.003454 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:07.003480 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:07.009215 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:07.012303 18432 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0923 23:39:07.014129 18432 out.go:177] - Using image docker.io/busybox:stable
I0923 23:39:07.015539 18432 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 23:39:07.015573 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0923 23:39:07.015705 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2554745198 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 23:39:07.018480 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0923 23:39:07.021258 18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0923 23:39:07.021283 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0923 23:39:07.021396 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2809208321 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0923 23:39:07.032669 18432 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 23:39:07.032699 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0923 23:39:07.032812 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1623682355 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 23:39:07.036622 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0923 23:39:07.041372 18432 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0923 23:39:07.041405 18432 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0923 23:39:07.041518 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube24433330 /etc/kubernetes/addons/ig-crd.yaml
I0923 23:39:07.042501 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0923 23:39:07.073625 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 23:39:07.086049 18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0923 23:39:07.086084 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0923 23:39:07.086205 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3932984417 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0923 23:39:07.087480 18432 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0923 23:39:07.087512 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0923 23:39:07.087646 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube147410042 /etc/kubernetes/addons/ig-daemonset.yaml
I0923 23:39:07.119854 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0923 23:39:07.165787 18432 exec_runner.go:51] Run: sudo systemctl start kubelet
I0923 23:39:07.222326 18432 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 23:39:07.222363 18432 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0923 23:39:07.222477 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1296068170 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 23:39:07.241784 18432 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
I0923 23:39:07.245050 18432 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
I0923 23:39:07.245071 18432 node_ready.go:38] duration metric: took 3.25547ms for node "ubuntu-20-agent-2" to be "Ready" ...
I0923 23:39:07.245081 18432 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 23:39:07.260165 18432 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0923 23:39:07.273022 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0923 23:39:07.534380 18432 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0923 23:39:07.715409 18432 addons.go:475] Verifying addon registry=true in "minikube"
I0923 23:39:07.717282 18432 out.go:177] * Verifying registry addon...
I0923 23:39:07.720495 18432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0923 23:39:07.726316 18432 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0923 23:39:07.726340 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:07.839991 18432 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0923 23:39:07.971761 18432 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0923 23:39:08.040127 18432 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0923 23:39:08.232025 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:08.253708 18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.133761281s)
I0923 23:39:08.323991 18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.287327075s)
I0923 23:39:08.733123 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:08.740375 18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.666695369s)
W0923 23:39:08.740416 18432 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 23:39:08.740442 18432 retry.go:31] will retry after 219.083787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0923 23:39:08.963513 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0923 23:39:09.225770 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:09.270117 18432 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:09.726328 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:09.928156 18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.994026972s)
I0923 23:39:10.181443 18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.908362172s)
I0923 23:39:10.181478 18432 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0923 23:39:10.185061 18432 out.go:177] * Verifying csi-hostpath-driver addon...
I0923 23:39:10.187609 18432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 23:39:10.193237 18432 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 23:39:10.193259 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:10.224661 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:10.692702 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:10.724859 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:11.192316 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:11.224183 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:11.692272 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:11.724201 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:11.765789 18432 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:11.840227 18432 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.876658089s)
I0923 23:39:12.192974 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:12.223967 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:12.265359 18432 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:12.265381 18432 pod_ready.go:82] duration metric: took 5.005186338s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0923 23:39:12.265393 18432 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0923 23:39:12.692862 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:12.723929 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:13.193570 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:13.224267 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:13.692069 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:13.791398 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:13.922243 18432 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0923 23:39:13.922399 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1371447207 /var/lib/minikube/google_application_credentials.json
I0923 23:39:13.931942 18432 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0923 23:39:13.932039 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3333738616 /var/lib/minikube/google_cloud_project
I0923 23:39:13.941782 18432 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0923 23:39:13.941824 18432 host.go:66] Checking if "minikube" exists ...
I0923 23:39:13.942284 18432 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0923 23:39:13.942300 18432 api_server.go:166] Checking apiserver status ...
I0923 23:39:13.942322 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:13.960256 18432 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19739/cgroup
I0923 23:39:13.969734 18432 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959"
I0923 23:39:13.969779 18432 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/b014ea59d4af905212415818deb1684eeefe76ca09a0ed68e2be86743ddb9959/freezer.state
I0923 23:39:13.978287 18432 api_server.go:204] freezer state: "THAWED"
I0923 23:39:13.978309 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:14.106768 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:14.106839 18432 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0923 23:39:14.165180 18432 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0923 23:39:14.192183 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:14.224069 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:14.235499 18432 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0923 23:39:14.270435 18432 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0923 23:39:14.293575 18432 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0923 23:39:14.293654 18432 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0923 23:39:14.293821 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube132876625 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0923 23:39:14.303855 18432 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0923 23:39:14.303879 18432 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0923 23:39:14.303966 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3630155931 /etc/kubernetes/addons/gcp-auth-service.yaml
I0923 23:39:14.313033 18432 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 23:39:14.313058 18432 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0923 23:39:14.313150 18432 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3471904788 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 23:39:14.320727 18432 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0923 23:39:14.691900 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:14.708272 18432 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0923 23:39:14.709916 18432 out.go:177] * Verifying gcp-auth addon...
I0923 23:39:14.712116 18432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0923 23:39:14.790727 18432 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 23:39:14.791288 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:15.191600 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:15.223733 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:15.692223 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:15.792263 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:16.191835 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:16.223471 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:16.270879 18432 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:16.270901 18432 pod_ready.go:82] duration metric: took 4.005499205s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.270914 18432 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.274783 18432 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:16.274803 18432 pod_ready.go:82] duration metric: took 3.882154ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.274813 18432 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9p26" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.278663 18432 pod_ready.go:93] pod "kube-proxy-k9p26" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:16.278682 18432 pod_ready.go:82] duration metric: took 3.86294ms for pod "kube-proxy-k9p26" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.278690 18432 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.282632 18432 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:16.282650 18432 pod_ready.go:82] duration metric: took 3.953566ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.282662 18432 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2wnr8" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.286261 18432 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-2wnr8" in "kube-system" namespace has status "Ready":"True"
I0923 23:39:16.286276 18432 pod_ready.go:82] duration metric: took 3.607653ms for pod "nvidia-device-plugin-daemonset-2wnr8" in "kube-system" namespace to be "Ready" ...
I0923 23:39:16.286285 18432 pod_ready.go:39] duration metric: took 9.04119029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0923 23:39:16.286304 18432 api_server.go:52] waiting for apiserver process to appear ...
I0923 23:39:16.286363 18432 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 23:39:16.304812 18432 api_server.go:72] duration metric: took 9.57797864s to wait for apiserver process to appear ...
I0923 23:39:16.304838 18432 api_server.go:88] waiting for apiserver healthz status ...
I0923 23:39:16.304859 18432 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0923 23:39:16.308958 18432 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0923 23:39:16.309815 18432 api_server.go:141] control plane version: v1.31.1
I0923 23:39:16.309838 18432 api_server.go:131] duration metric: took 4.992795ms to wait for apiserver health ...
I0923 23:39:16.309847 18432 system_pods.go:43] waiting for kube-system pods to appear ...
I0923 23:39:16.475191 18432 system_pods.go:59] 16 kube-system pods found
I0923 23:39:16.475222 18432 system_pods.go:61] "coredns-7c65d6cfc9-48st5" [d679e0bc-9afa-45d5-8d47-aa413a0cf466] Running
I0923 23:39:16.475233 18432 system_pods.go:61] "csi-hostpath-attacher-0" [c8ff58d3-6250-4451-9018-4b11f9fec10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 23:39:16.475242 18432 system_pods.go:61] "csi-hostpath-resizer-0" [2e6f2fcb-0de8-48b2-9727-cfe783496221] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 23:39:16.475255 18432 system_pods.go:61] "csi-hostpathplugin-h6hck" [94d367c6-48a7-48f2-8752-2e842cd7aba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 23:39:16.475262 18432 system_pods.go:61] "etcd-ubuntu-20-agent-2" [af379b5d-6311-48f9-9300-2c244eb7c693] Running
I0923 23:39:16.475269 18432 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [8c1f258a-baa6-4a1f-9783-0fdfc0c40cb8] Running
I0923 23:39:16.475274 18432 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [d567df93-d95c-4845-845b-799bf9f14489] Running
I0923 23:39:16.475278 18432 system_pods.go:61] "kube-proxy-k9p26" [5e7867c1-dea7-4107-bd2a-995730bcc143] Running
I0923 23:39:16.475283 18432 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [1d54e9fe-7ed0-45b2-bf4d-77eb49e6f2ce] Running
I0923 23:39:16.475290 18432 system_pods.go:61] "metrics-server-84c5f94fbc-kfb6d" [ac1f00cd-4fff-4140-a995-8627eed03faf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 23:39:16.475296 18432 system_pods.go:61] "nvidia-device-plugin-daemonset-2wnr8" [1965e7c3-c30f-45a0-9555-6b2c4506d582] Running
I0923 23:39:16.475305 18432 system_pods.go:61] "registry-66c9cd494c-jh4zk" [1fd26fe1-569a-41d8-bd27-41ea6d31c232] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 23:39:16.475316 18432 system_pods.go:61] "registry-proxy-twks8" [b1bc2a37-dafc-48f7-94a2-b80e57e12b9a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 23:39:16.475334 18432 system_pods.go:61] "snapshot-controller-56fcc65765-5hqd5" [d8a0657b-64c7-4669-a78d-336260cb986c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 23:39:16.475346 18432 system_pods.go:61] "snapshot-controller-56fcc65765-68n75" [8076a2dd-75ee-4755-be9c-da981c1711e1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 23:39:16.475353 18432 system_pods.go:61] "storage-provisioner" [9b61b607-d66c-485c-abe5-004021445c34] Running
I0923 23:39:16.475363 18432 system_pods.go:74] duration metric: took 165.507567ms to wait for pod list to return data ...
I0923 23:39:16.475372 18432 default_sa.go:34] waiting for default service account to be created ...
I0923 23:39:16.669368 18432 default_sa.go:45] found service account: "default"
I0923 23:39:16.669394 18432 default_sa.go:55] duration metric: took 194.015552ms for default service account to be created ...
I0923 23:39:16.669405 18432 system_pods.go:116] waiting for k8s-apps to be running ...
I0923 23:39:16.692336 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:16.724190 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:16.876135 18432 system_pods.go:86] 16 kube-system pods found
I0923 23:39:16.876165 18432 system_pods.go:89] "coredns-7c65d6cfc9-48st5" [d679e0bc-9afa-45d5-8d47-aa413a0cf466] Running
I0923 23:39:16.876178 18432 system_pods.go:89] "csi-hostpath-attacher-0" [c8ff58d3-6250-4451-9018-4b11f9fec10d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0923 23:39:16.876200 18432 system_pods.go:89] "csi-hostpath-resizer-0" [2e6f2fcb-0de8-48b2-9727-cfe783496221] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0923 23:39:16.876224 18432 system_pods.go:89] "csi-hostpathplugin-h6hck" [94d367c6-48a7-48f2-8752-2e842cd7aba4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0923 23:39:16.876233 18432 system_pods.go:89] "etcd-ubuntu-20-agent-2" [af379b5d-6311-48f9-9300-2c244eb7c693] Running
I0923 23:39:16.876245 18432 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [8c1f258a-baa6-4a1f-9783-0fdfc0c40cb8] Running
I0923 23:39:16.876252 18432 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [d567df93-d95c-4845-845b-799bf9f14489] Running
I0923 23:39:16.876260 18432 system_pods.go:89] "kube-proxy-k9p26" [5e7867c1-dea7-4107-bd2a-995730bcc143] Running
I0923 23:39:16.876266 18432 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [1d54e9fe-7ed0-45b2-bf4d-77eb49e6f2ce] Running
I0923 23:39:16.876277 18432 system_pods.go:89] "metrics-server-84c5f94fbc-kfb6d" [ac1f00cd-4fff-4140-a995-8627eed03faf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0923 23:39:16.876282 18432 system_pods.go:89] "nvidia-device-plugin-daemonset-2wnr8" [1965e7c3-c30f-45a0-9555-6b2c4506d582] Running
I0923 23:39:16.876294 18432 system_pods.go:89] "registry-66c9cd494c-jh4zk" [1fd26fe1-569a-41d8-bd27-41ea6d31c232] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0923 23:39:16.876307 18432 system_pods.go:89] "registry-proxy-twks8" [b1bc2a37-dafc-48f7-94a2-b80e57e12b9a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0923 23:39:16.876319 18432 system_pods.go:89] "snapshot-controller-56fcc65765-5hqd5" [d8a0657b-64c7-4669-a78d-336260cb986c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 23:39:16.876329 18432 system_pods.go:89] "snapshot-controller-56fcc65765-68n75" [8076a2dd-75ee-4755-be9c-da981c1711e1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0923 23:39:16.876336 18432 system_pods.go:89] "storage-provisioner" [9b61b607-d66c-485c-abe5-004021445c34] Running
I0923 23:39:16.876344 18432 system_pods.go:126] duration metric: took 206.933146ms to wait for k8s-apps to be running ...
I0923 23:39:16.876358 18432 system_svc.go:44] waiting for kubelet service to be running ....
I0923 23:39:16.876408 18432 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0923 23:39:16.892504 18432 system_svc.go:56] duration metric: took 16.137748ms WaitForService to wait for kubelet
I0923 23:39:16.892533 18432 kubeadm.go:582] duration metric: took 10.16570392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 23:39:16.892558 18432 node_conditions.go:102] verifying NodePressure condition ...
I0923 23:39:17.070022 18432 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0923 23:39:17.070054 18432 node_conditions.go:123] node cpu capacity is 8
I0923 23:39:17.070067 18432 node_conditions.go:105] duration metric: took 177.503592ms to run NodePressure ...
I0923 23:39:17.070080 18432 start.go:241] waiting for startup goroutines ...
I0923 23:39:17.216332 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:17.316382 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:17.692120 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:17.792614 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:18.192784 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:18.223025 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:18.691562 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:18.723416 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:19.194346 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:19.294030 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:19.691069 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:19.723771 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:20.192176 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:20.223909 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:20.692794 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:20.723470 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:21.193180 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:21.224390 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:21.692832 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:21.792568 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:22.192434 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:22.223509 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:22.692853 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:22.723726 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:23.192626 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:23.224712 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:23.692906 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:23.723678 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:24.192473 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:24.223258 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:24.692103 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:24.723978 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:25.192336 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:25.292291 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:25.691571 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:25.723694 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:26.192462 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:26.246005 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:26.691729 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:26.723514 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0923 23:39:27.191684 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:27.223122 18432 kapi.go:107] duration metric: took 19.502629386s to wait for kubernetes.io/minikube-addons=registry ...
I0923 23:39:27.692737 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:28.192690 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:28.692075 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:29.193410 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:29.691597 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:30.192393 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:30.691681 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:31.191296 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:31.691931 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:32.194863 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:32.691498 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:33.192361 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:33.692299 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:34.192000 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:34.692419 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:35.191641 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:35.692918 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:36.192362 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:36.691476 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:37.191757 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:37.692404 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:38.192158 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:38.693252 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:39.193949 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:39.692581 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:40.192828 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:40.693165 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:41.191766 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:41.692471 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:42.215665 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:42.692365 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:43.191464 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:43.692155 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:44.191831 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:44.692392 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:45.192481 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:45.692916 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:46.191724 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0923 23:39:46.692239 18432 kapi.go:107] duration metric: took 36.504628597s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0923 23:39:56.216092 18432 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0923 23:39:56.216113 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:39:56.715405 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:39:57.215731 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:39:57.715701 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:39:58.215390 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:39:58.715137 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:39:59.215180 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:39:59.715278 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:00.215042 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:00.715066 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:01.215017 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:01.715811 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:02.215953 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:02.715771 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:03.215704 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:03.715507 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:04.215734 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:04.715585 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:05.215480 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:05.716019 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:06.215355 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:06.715100 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:07.215530 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:07.715695 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:08.215477 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:08.715664 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:09.215462 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:09.715496 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:10.215207 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:10.715529 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:11.215670 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:11.715827 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:12.215453 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:12.715372 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:13.215030 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:13.715660 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:14.215673 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:14.715349 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:15.215117 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:15.715414 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:16.215132 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:16.715481 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:17.215570 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:17.715439 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:18.215767 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:18.716284 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:19.215632 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:19.735946 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:20.215341 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:20.715598 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:21.215481 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:21.715536 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:22.215523 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:22.715465 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:23.215134 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:23.716198 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:24.215294 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:24.714766 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:25.215416 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:25.715836 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:26.215861 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:26.715470 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:27.215573 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:27.715734 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:28.215953 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:28.716140 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:29.215581 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:29.715490 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:30.215233 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:30.715388 18432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0923 23:40:31.215442 18432 kapi.go:107] duration metric: took 1m16.503323021s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0923 23:40:31.217054 18432 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0923 23:40:31.218297 18432 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0923 23:40:31.219525 18432 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0923 23:40:31.220904 18432 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, yakd, metrics-server, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0923 23:40:31.222355 18432 addons.go:510] duration metric: took 1m24.502265042s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass yakd metrics-server inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0923 23:40:31.222394 18432 start.go:246] waiting for cluster config update ...
I0923 23:40:31.222413 18432 start.go:255] writing updated cluster config ...
I0923 23:40:31.222679 18432 exec_runner.go:51] Run: rm -f paused
I0923 23:40:31.267257 18432 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0923 23:40:31.268917 18432 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Thu 2024-08-15 05:18:14 UTC, end at Mon 2024-09-23 23:50:22 UTC. --
Sep 23 23:42:43 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:42:43.145372192Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c9a5deaf674a17a9 traceID=2aba2725a5071a96cb2fbdcd3c2db75c
Sep 23 23:44:08 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:44:08.084064973Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5868ea084aa34a52 traceID=acdec5092f371a52f1d12e6b63f998f1
Sep 23 23:44:08 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:44:08.086229392Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5868ea084aa34a52 traceID=acdec5092f371a52f1d12e6b63f998f1
Sep 23 23:45:18 ubuntu-20-agent-2 cri-dockerd[18979]: time="2024-09-23T23:45:18Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.414401073Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.414457526Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.416206399Z" level=error msg="Error running exec edf08d6a51874c2cc307dba7aafd45ecd1649de40226fcd4639e8d561491d403 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=8357d8297aafdea0 traceID=e2228613a7e517f91b777742dad825b3
Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.479647904Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.479647939Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.481392045Z" level=error msg="Error running exec 5e89eaea9880caf4c543e16f8d9b8bd6666dafe9d337451b531ddb99b0bd5fd6 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=132479b1a7a19163 traceID=7bde93eb3e1cd634caf0b80fe9b11aa4
Sep 23 23:45:19 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:45:19.615285814Z" level=info msg="ignoring event" container=83dfa51e56c75dfb4b0faefa8ea4ceae7d1479e97caeb54566a9ddce5e26bf57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:46:57 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:46:57.099934441Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=d5482e3990f09be6 traceID=46ea8e0bad4f3739a365ffcb07300c3f
Sep 23 23:46:57 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:46:57.102318894Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=d5482e3990f09be6 traceID=46ea8e0bad4f3739a365ffcb07300c3f
Sep 23 23:49:22 ubuntu-20-agent-2 cri-dockerd[18979]: time="2024-09-23T23:49:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a27ef64abfb24c0f9177fca73fcdc9d1332c537d5cd6a37974dff83d28718954/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 23 23:49:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:49:22.765133118Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2b6bbcec57dca1f3 traceID=742e77a588309ed738e63d2c982cdb67
Sep 23 23:49:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:49:22.767151099Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2b6bbcec57dca1f3 traceID=742e77a588309ed738e63d2c982cdb67
Sep 23 23:49:34 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:49:34.089611907Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2d70a1dc9c2856d1 traceID=53ebcb273cf7d25966b7f8e7b77225fd
Sep 23 23:49:34 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:49:34.091487335Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2d70a1dc9c2856d1 traceID=53ebcb273cf7d25966b7f8e7b77225fd
Sep 23 23:50:00 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:00.095039419Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=79911499c8fe6a0b traceID=4c0206d04cecb99a7480c331c5b6cc0c
Sep 23 23:50:00 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:00.097183329Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=79911499c8fe6a0b traceID=4c0206d04cecb99a7480c331c5b6cc0c
Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.180905355Z" level=info msg="ignoring event" container=a27ef64abfb24c0f9177fca73fcdc9d1332c537d5cd6a37974dff83d28718954 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.428743748Z" level=info msg="ignoring event" container=6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.487508948Z" level=info msg="ignoring event" container=5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.563447368Z" level=info msg="ignoring event" container=dce94edaac03bb3e640dafe4ce1ba4c623b72547b4b00e000ee2e5a7a011718a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 23 23:50:22 ubuntu-20-agent-2 dockerd[18649]: time="2024-09-23T23:50:22.643992334Z" level=info msg="ignoring event" container=60418f53397fe1557c1cf552ba01c487ad00bab55167dadb9d28aecdcee36ee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
83dfa51e56c75 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 5 minutes ago Exited gadget 6 3e8853cc705e2 gadget-8z8sg
ba2c3bb63a208 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 4803c0b83d910 gcp-auth-89d5ffd79-4blxd
f0fb36c0c6ad4 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 5597a5300ac16 csi-hostpathplugin-h6hck
6ed9ccd8da247 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 5597a5300ac16 csi-hostpathplugin-h6hck
6d7f7d70cac5a registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 5597a5300ac16 csi-hostpathplugin-h6hck
088fd20c2fd81 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 5597a5300ac16 csi-hostpathplugin-h6hck
2742add6570d1 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 5597a5300ac16 csi-hostpathplugin-h6hck
4e0081f0d620d registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 f054130d78248 csi-hostpath-attacher-0
7dcaea766ee6d registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 7b7566773ebcf csi-hostpath-resizer-0
27bf14a8312f3 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 5597a5300ac16 csi-hostpathplugin-h6hck
a61ab9d35c13e registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 2d8bdd3237cac snapshot-controller-56fcc65765-5hqd5
4e35ab28cefbb registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 d3ff3d7efad58 snapshot-controller-56fcc65765-68n75
583b4557dd1dd rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 10 minutes ago Running local-path-provisioner 0 8b9bffc240e92 local-path-provisioner-86d989889c-sc47k
a791577cca5a2 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 f063f26a4f75b yakd-dashboard-67d98fc6b-w54xd
9d38b2469ff46 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 11 minutes ago Running metrics-server 0 f26ed69a5918e metrics-server-84c5f94fbc-kfb6d
a943a4924081c gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 11 minutes ago Running cloud-spanner-emulator 0 413e871ef3e48 cloud-spanner-emulator-5b584cc74-th77b
b503b99111eed nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 e59695eca1734 nvidia-device-plugin-daemonset-2wnr8
73cd2003c8e68 c69fa2e9cbf5f 11 minutes ago Running coredns 0 402bc2953059c coredns-7c65d6cfc9-48st5
76b81b284c2d7 6e38f40d628db 11 minutes ago Running storage-provisioner 0 bacf4d8ae21fe storage-provisioner
dd68ad72ce52b 60c005f310ff3 11 minutes ago Running kube-proxy 0 c9c13e4edaac6 kube-proxy-k9p26
b90fe5d50acd6 2e96e5913fc06 11 minutes ago Running etcd 0 b92b01e542fa3 etcd-ubuntu-20-agent-2
7e500a67c4d3d 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 c099feaad67e2 kube-controller-manager-ubuntu-20-agent-2
b014ea59d4af9 6bab7719df100 11 minutes ago Running kube-apiserver 0 b82d91d215f9a kube-apiserver-ubuntu-20-agent-2
64eba13525584 9aa1fad941575 11 minutes ago Running kube-scheduler 0 80854062edf18 kube-scheduler-ubuntu-20-agent-2
==> coredns [73cd2003c8e6] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:43477 - 59171 "HINFO IN 4658389154335151315.896572764661331685. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.203543242s
[INFO] 10.244.0.24:48428 - 48264 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000296247s
[INFO] 10.244.0.24:41913 - 5315 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000370033s
[INFO] 10.244.0.24:50818 - 8056 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100574s
[INFO] 10.244.0.24:51596 - 47262 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137289s
[INFO] 10.244.0.24:42468 - 3678 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130623s
[INFO] 10.244.0.24:52371 - 37731 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149165s
[INFO] 10.244.0.24:36120 - 38242 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003491113s
[INFO] 10.244.0.24:45515 - 3155 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004191745s
[INFO] 10.244.0.24:47392 - 27648 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.002971199s
[INFO] 10.244.0.24:39136 - 19472 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003506534s
[INFO] 10.244.0.24:38852 - 12519 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002973071s
[INFO] 10.244.0.24:35810 - 41647 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003102396s
[INFO] 10.244.0.24:50527 - 178 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002117937s
[INFO] 10.244.0.24:34441 - 42306 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002729227s
==> describe nodes <==
Name: ubuntu-20-agent-2
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-2
kubernetes.io/os=linux
minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_23T23_39_02_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-2
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 23 Sep 2024 23:38:59 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-2
AcquireTime: <unset>
RenewTime: Mon, 23 Sep 2024 23:50:16 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 23 Sep 2024 23:46:11 +0000 Mon, 23 Sep 2024 23:38:59 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 23 Sep 2024 23:46:11 +0000 Mon, 23 Sep 2024 23:38:59 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 23 Sep 2024 23:46:11 +0000 Mon, 23 Sep 2024 23:38:59 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 23 Sep 2024 23:46:11 +0000 Mon, 23 Sep 2024 23:38:59 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.138.0.48
Hostname: ubuntu-20-agent-2
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859304Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859304Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 1ec29a5c-5f40-e854-ccac-68a60c2524db
Boot ID: 38b63acc-66f8-4c7e-8578-c838561f2860
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m14s
default cloud-spanner-emulator-5b584cc74-th77b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-8z8sg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-4blxd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-48st5 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-h6hck 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-2 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-2 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-2 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-k9p26 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-2 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-kfb6d 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-2wnr8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-5hqd5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-68n75 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-sc47k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-w54xd 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
==> dmesg <==
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 23 f2 19 24 ac 08 06
[ +1.050755] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 3d 7e 5a ac 7c 08 06
[ +0.013452] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 6a cb ea 95 b1 08 06
[ +2.558803] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 ad c2 43 0b d5 08 06
[ +1.672150] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ae e6 39 2a 23 d6 08 06
[ +1.880477] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 fc 2b 31 d0 80 08 06
[ +4.877626] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff da c3 da f9 9c 27 08 06
[ +0.139606] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 fd 60 1e e2 c6 08 06
[ +0.440564] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a a7 bf ff 27 58 08 06
[Sep23 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 e4 a9 71 3c ed 08 06
[ +0.097865] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 02 f9 bf 0e c3 08 06
[ +10.876730] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 1f fc 93 70 85 08 06
[ +0.000480] IPv4: martian source 10.244.0.24 from 10.244.0.6, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 36 d5 1e 0d 11 a7 08 06
==> etcd [b90fe5d50acd] <==
{"level":"info","ts":"2024-09-23T23:38:58.583629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-23T23:38:58.583656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
{"level":"info","ts":"2024-09-23T23:38:58.583669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
{"level":"info","ts":"2024-09-23T23:38:58.583680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-23T23:38:58.583689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
{"level":"info","ts":"2024-09-23T23:38:58.583714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-23T23:38:58.584779Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-23T23:38:58.584776Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T23:38:58.584814Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T23:38:58.584845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-23T23:38:58.584982Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-23T23:38:58.585012Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-23T23:38:58.585539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T23:38:58.585892Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T23:38:58.585961Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-23T23:38:58.587279Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T23:38:58.587395Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-23T23:38:58.588492Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
{"level":"info","ts":"2024-09-23T23:38:58.588782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2024-09-23T23:39:14.105960Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.013975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-23T23:39:14.106034Z","caller":"traceutil/trace.go:171","msg":"trace[995397433] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:815; }","duration":"124.13314ms","start":"2024-09-23T23:39:13.981888Z","end":"2024-09-23T23:39:14.106021Z","steps":["trace[995397433] 'range keys from in-memory index tree' (duration: 123.943764ms)"],"step_count":1}
{"level":"info","ts":"2024-09-23T23:39:14.481304Z","caller":"traceutil/trace.go:171","msg":"trace[1093074071] transaction","detail":"{read_only:false; response_revision:816; number_of_response:1; }","duration":"109.581542ms","start":"2024-09-23T23:39:14.371708Z","end":"2024-09-23T23:39:14.481290Z","steps":["trace[1093074071] 'process raft request' (duration: 109.483898ms)"],"step_count":1}
{"level":"info","ts":"2024-09-23T23:48:58.602336Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1686}
{"level":"info","ts":"2024-09-23T23:48:58.626172Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1686,"took":"23.369626ms","hash":3865861227,"current-db-size-bytes":8175616,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4337664,"current-db-size-in-use":"4.3 MB"}
{"level":"info","ts":"2024-09-23T23:48:58.626214Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3865861227,"revision":1686,"compact-revision":-1}
==> gcp-auth [ba2c3bb63a20] <==
2024/09/23 23:40:30 GCP Auth Webhook started!
2024/09/23 23:40:47 Ready to marshal response ...
2024/09/23 23:40:47 Ready to write response ...
2024/09/23 23:40:48 Ready to marshal response ...
2024/09/23 23:40:48 Ready to write response ...
2024/09/23 23:41:09 Ready to marshal response ...
2024/09/23 23:41:09 Ready to write response ...
2024/09/23 23:41:09 Ready to marshal response ...
2024/09/23 23:41:09 Ready to write response ...
2024/09/23 23:41:10 Ready to marshal response ...
2024/09/23 23:41:10 Ready to write response ...
2024/09/23 23:49:22 Ready to marshal response ...
2024/09/23 23:49:22 Ready to write response ...
==> kernel <==
23:50:23 up 32 min, 0 users, load average: 0.31, 0.37, 0.33
Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [b014ea59d4af] <==
W0923 23:39:48.845664 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.177.38:443: connect: connection refused
W0923 23:39:55.715515 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.61.13:443: connect: connection refused
E0923 23:39:55.715548 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.61.13:443: connect: connection refused" logger="UnhandledError"
W0923 23:40:17.725452 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.61.13:443: connect: connection refused
E0923 23:40:17.725493 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.61.13:443: connect: connection refused" logger="UnhandledError"
W0923 23:40:17.738595 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.61.13:443: connect: connection refused
E0923 23:40:17.738636 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.61.13:443: connect: connection refused" logger="UnhandledError"
I0923 23:40:47.526712 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0923 23:40:47.542650 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0923 23:40:59.911435 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0923 23:40:59.925747 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0923 23:41:00.028526 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0923 23:41:00.055861 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0923 23:41:00.055913 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0923 23:41:00.064147 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0923 23:41:00.192022 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 23:41:00.205463 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0923 23:41:00.256061 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0923 23:41:00.969621 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0923 23:41:01.070881 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0923 23:41:01.081631 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0923 23:41:01.097418 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0923 23:41:01.256935 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0923 23:41:01.302930 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0923 23:41:01.450389 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [7e500a67c4d3] <==
W0923 23:48:59.530622 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:48:59.530662 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:49:10.809457 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:49:10.809496 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:49:16.655560 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:49:16.655604 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:49:18.335185 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:49:18.335224 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:49:25.503907 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:49:25.503972 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:49:34.089319 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:49:34.089362 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:49:42.072793 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:49:42.072830 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:49:43.683958 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:49:43.683996 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:49:49.657904 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:49:49.657947 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:50:06.095992 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:50:06.096033 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:50:07.830773 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:50:07.830812 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0923 23:50:17.517197 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0923 23:50:17.517237 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0923 23:50:22.394710 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.181µs"
==> kube-proxy [dd68ad72ce52] <==
I0923 23:39:08.551651 1 server_linux.go:66] "Using iptables proxy"
I0923 23:39:08.716242 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
E0923 23:39:08.716314 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0923 23:39:08.812648 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0923 23:39:08.812729 1 server_linux.go:169] "Using iptables Proxier"
I0923 23:39:08.821062 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0923 23:39:08.821382 1 server.go:483] "Version info" version="v1.31.1"
I0923 23:39:08.821403 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0923 23:39:08.827527 1 config.go:199] "Starting service config controller"
I0923 23:39:08.827543 1 shared_informer.go:313] Waiting for caches to sync for service config
I0923 23:39:08.827573 1 config.go:105] "Starting endpoint slice config controller"
I0923 23:39:08.827579 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0923 23:39:08.828068 1 config.go:328] "Starting node config controller"
I0923 23:39:08.828077 1 shared_informer.go:313] Waiting for caches to sync for node config
I0923 23:39:08.930364 1 shared_informer.go:320] Caches are synced for node config
I0923 23:39:08.930404 1 shared_informer.go:320] Caches are synced for service config
I0923 23:39:08.930456 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [64eba1352558] <==
E0923 23:38:59.735444 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 23:38:59.735500 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0923 23:38:59.735534 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
E0923 23:38:59.734654 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 23:38:59.735630 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0923 23:38:59.735661 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 23:38:59.735734 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0923 23:38:59.735802 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0923 23:38:59.735829 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
E0923 23:38:59.735764 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 23:38:59.735961 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0923 23:38:59.735985 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 23:38:59.736166 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0923 23:38:59.736188 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0923 23:38:59.736415 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0923 23:38:59.736426 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0923 23:38:59.736439 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0923 23:38:59.736450 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0923 23:38:59.736449 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0923 23:38:59.736468 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0923 23:39:00.600063 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0923 23:39:00.600119 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0923 23:39:00.607286 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0923 23:39:00.607319 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0923 23:39:01.333868 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Thu 2024-08-15 05:18:14 UTC, end at Mon 2024-09-23 23:50:23 UTC. --
Sep 23 23:50:04 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:04.952768 19859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="375e9c91-f2dd-4d52-a086-6895e79b1d1e"
Sep 23 23:50:13 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:13.951497 19859 scope.go:117] "RemoveContainer" containerID="83dfa51e56c75dfb4b0faefa8ea4ceae7d1479e97caeb54566a9ddce5e26bf57"
Sep 23 23:50:13 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:13.951709 19859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-8z8sg_gadget(d327e609-6b19-4431-b38f-9029fefa34a3)\"" pod="gadget/gadget-8z8sg" podUID="d327e609-6b19-4431-b38f-9029fefa34a3"
Sep 23 23:50:13 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:13.953344 19859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="977ee237-4755-42a9-bdfb-c1d58f1158cf"
Sep 23 23:50:17 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:17.953270 19859 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="375e9c91-f2dd-4d52-a086-6895e79b1d1e"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.381606 19859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/977ee237-4755-42a9-bdfb-c1d58f1158cf-gcp-creds\") pod \"977ee237-4755-42a9-bdfb-c1d58f1158cf\" (UID: \"977ee237-4755-42a9-bdfb-c1d58f1158cf\") "
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.381661 19859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkdpm\" (UniqueName: \"kubernetes.io/projected/977ee237-4755-42a9-bdfb-c1d58f1158cf-kube-api-access-hkdpm\") pod \"977ee237-4755-42a9-bdfb-c1d58f1158cf\" (UID: \"977ee237-4755-42a9-bdfb-c1d58f1158cf\") "
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.381717 19859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/977ee237-4755-42a9-bdfb-c1d58f1158cf-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "977ee237-4755-42a9-bdfb-c1d58f1158cf" (UID: "977ee237-4755-42a9-bdfb-c1d58f1158cf"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.384128 19859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977ee237-4755-42a9-bdfb-c1d58f1158cf-kube-api-access-hkdpm" (OuterVolumeSpecName: "kube-api-access-hkdpm") pod "977ee237-4755-42a9-bdfb-c1d58f1158cf" (UID: "977ee237-4755-42a9-bdfb-c1d58f1158cf"). InnerVolumeSpecName "kube-api-access-hkdpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.481964 19859 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/977ee237-4755-42a9-bdfb-c1d58f1158cf-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.482009 19859 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hkdpm\" (UniqueName: \"kubernetes.io/projected/977ee237-4755-42a9-bdfb-c1d58f1158cf-kube-api-access-hkdpm\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.784279 19859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md6f6\" (UniqueName: \"kubernetes.io/projected/1fd26fe1-569a-41d8-bd27-41ea6d31c232-kube-api-access-md6f6\") pod \"1fd26fe1-569a-41d8-bd27-41ea6d31c232\" (UID: \"1fd26fe1-569a-41d8-bd27-41ea6d31c232\") "
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.786480 19859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd26fe1-569a-41d8-bd27-41ea6d31c232-kube-api-access-md6f6" (OuterVolumeSpecName: "kube-api-access-md6f6") pod "1fd26fe1-569a-41d8-bd27-41ea6d31c232" (UID: "1fd26fe1-569a-41d8-bd27-41ea6d31c232"). InnerVolumeSpecName "kube-api-access-md6f6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.840341 19859 scope.go:117] "RemoveContainer" containerID="5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.857549 19859 scope.go:117] "RemoveContainer" containerID="5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:22.858515 19859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760" containerID="5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.858567 19859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"} err="failed to get container status \"5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760\": rpc error: code = Unknown desc = Error response from daemon: No such container: 5455350084a924c76490ec577c3e5d00e1e7f1f69a3a57c057a4bf455f3e6760"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.858597 19859 scope.go:117] "RemoveContainer" containerID="6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.875825 19859 scope.go:117] "RemoveContainer" containerID="6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: E0923 23:50:22.876654 19859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89" containerID="6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.876698 19859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"} err="failed to get container status \"6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89\": rpc error: code = Unknown desc = Error response from daemon: No such container: 6cb5869ba4a86a9a6f5e9e71846376abc36a234efcdbdf612ae31cb31de43c89"
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.885040 19859 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8lkw\" (UniqueName: \"kubernetes.io/projected/b1bc2a37-dafc-48f7-94a2-b80e57e12b9a-kube-api-access-f8lkw\") pod \"b1bc2a37-dafc-48f7-94a2-b80e57e12b9a\" (UID: \"b1bc2a37-dafc-48f7-94a2-b80e57e12b9a\") "
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.885118 19859 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-md6f6\" (UniqueName: \"kubernetes.io/projected/1fd26fe1-569a-41d8-bd27-41ea6d31c232-kube-api-access-md6f6\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.886921 19859 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1bc2a37-dafc-48f7-94a2-b80e57e12b9a-kube-api-access-f8lkw" (OuterVolumeSpecName: "kube-api-access-f8lkw") pod "b1bc2a37-dafc-48f7-94a2-b80e57e12b9a" (UID: "b1bc2a37-dafc-48f7-94a2-b80e57e12b9a"). InnerVolumeSpecName "kube-api-access-f8lkw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 23 23:50:22 ubuntu-20-agent-2 kubelet[19859]: I0923 23:50:22.985947 19859 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f8lkw\" (UniqueName: \"kubernetes.io/projected/b1bc2a37-dafc-48f7-94a2-b80e57e12b9a-kube-api-access-f8lkw\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
==> storage-provisioner [76b81b284c2d] <==
I0923 23:39:09.020188 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0923 23:39:09.034064 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0923 23:39:09.034109 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0923 23:39:09.042235 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0923 23:39:09.042398 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_55db1eb7-47c2-43d6-b4c3-9de0248b2260!
I0923 23:39:09.046263 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"08d74b23-1e4a-4198-9169-fe387f5c40cf", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_55db1eb7-47c2-43d6-b4c3-9de0248b2260 became leader
I0923 23:39:09.142890 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_55db1eb7-47c2-43d6-b4c3-9de0248b2260!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent-2/10.138.0.48
Start Time:       Mon, 23 Sep 2024 23:41:09 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
  IP:  10.244.0.26
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hq8sm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-hq8sm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
  Normal   Pulling    7m41s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m40s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m40s (x4 over 9m13s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m27s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m8s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.75s)