=== RUN TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.685588ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-rctqp" [5e62917f-fbaf-47a4-ab23-4c40518c66e2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003309684s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tcgsk" [2c97d680-312e-4178-b7a5-ec0b4dacb6a2] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003533863s
addons_test.go:342: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080478004s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
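The two assertion failures above say the in-cluster probe never answered before kubectl's one-minute attach timeout: wget was expected to report "HTTP/1.1 200" from the registry service, but the pod produced no output at all. A minimal manual re-run of the same probe, assuming the cluster from this log is still up (context, service name, and image are taken verbatim from the failing command; the nslookup step and the endpoints check are triage additions, not part of the test):

    # Re-run the probe by hand, resolving the service name first to separate DNS failures from HTTP failures.
    kubectl --context minikube run registry-test --rm --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "nslookup registry.kube-system.svc.cluster.local && wget --spider -S http://registry.kube-system.svc.cluster.local"
    # If DNS resolves but wget still hangs, check whether the service has any backing endpoints.
    kubectl --context minikube -n kube-system get endpoints registry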
addons_test.go:361: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/13 23:39:11 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:43107 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:27 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:29 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| | --addons=helm-tiller | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 13 Sep 24 23:29 UTC | 13 Sep 24 23:29 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/13 23:27:40
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0913 23:27:40.042854 17442 out.go:345] Setting OutFile to fd 1 ...
I0913 23:27:40.043112 17442 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:27:40.043121 17442 out.go:358] Setting ErrFile to fd 2...
I0913 23:27:40.043125 17442 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:27:40.043329 17442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5268/.minikube/bin
I0913 23:27:40.043997 17442 out.go:352] Setting JSON to false
I0913 23:27:40.044804 17442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":612,"bootTime":1726269448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0913 23:27:40.044896 17442 start.go:139] virtualization: kvm guest
I0913 23:27:40.047207 17442 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0913 23:27:40.048483 17442 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-5268/.minikube/cache/preloaded-tarball: no such file or directory
I0913 23:27:40.048520 17442 notify.go:220] Checking for updates...
I0913 23:27:40.048544 17442 out.go:177] - MINIKUBE_LOCATION=19640
I0913 23:27:40.049780 17442 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0913 23:27:40.051042 17442 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
I0913 23:27:40.052443 17442 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
I0913 23:27:40.053770 17442 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0913 23:27:40.055051 17442 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0913 23:27:40.056493 17442 driver.go:394] Setting default libvirt URI to qemu:///system
I0913 23:27:40.065713 17442 out.go:177] * Using the none driver based on user configuration
I0913 23:27:40.066841 17442 start.go:297] selected driver: none
I0913 23:27:40.066861 17442 start.go:901] validating driver "none" against <nil>
I0913 23:27:40.066876 17442 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0913 23:27:40.066932 17442 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0913 23:27:40.067318 17442 out.go:270] ! The 'none' driver does not respect the --memory flag
I0913 23:27:40.068029 17442 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0913 23:27:40.068282 17442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0913 23:27:40.068320 17442 cni.go:84] Creating CNI manager for ""
I0913 23:27:40.068369 17442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0913 23:27:40.068379 17442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0913 23:27:40.068431 17442 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0913 23:27:40.069834 17442 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0913 23:27:40.071136 17442 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/config.json ...
I0913 23:27:40.071169 17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/config.json: {Name:mk8b3ce9215b269a8298bd5f636510296ec26782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:40.071292 17442 start.go:360] acquireMachinesLock for minikube: {Name:mkd91394d598fa6821cf9805e599aa6da131df53 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0913 23:27:40.071337 17442 start.go:364] duration metric: took 30.169µs to acquireMachinesLock for "minikube"
I0913 23:27:40.071353 17442 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0913 23:27:40.071486 17442 start.go:125] createHost starting for "" (driver="none")
I0913 23:27:40.072849 17442 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0913 23:27:40.074029 17442 exec_runner.go:51] Run: systemctl --version
I0913 23:27:40.076863 17442 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0913 23:27:40.076887 17442 client.go:168] LocalClient.Create starting
I0913 23:27:40.076926 17442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5268/.minikube/certs/ca.pem
I0913 23:27:40.076950 17442 main.go:141] libmachine: Decoding PEM data...
I0913 23:27:40.076963 17442 main.go:141] libmachine: Parsing certificate...
I0913 23:27:40.076998 17442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5268/.minikube/certs/cert.pem
I0913 23:27:40.077023 17442 main.go:141] libmachine: Decoding PEM data...
I0913 23:27:40.077031 17442 main.go:141] libmachine: Parsing certificate...
I0913 23:27:40.077313 17442 client.go:171] duration metric: took 419.989µs to LocalClient.Create
I0913 23:27:40.077341 17442 start.go:167] duration metric: took 480.068µs to libmachine.API.Create "minikube"
I0913 23:27:40.077346 17442 start.go:293] postStartSetup for "minikube" (driver="none")
I0913 23:27:40.077392 17442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0913 23:27:40.077427 17442 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0913 23:27:40.085405 17442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0913 23:27:40.085423 17442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0913 23:27:40.085431 17442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0913 23:27:40.087097 17442 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0913 23:27:40.088124 17442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5268/.minikube/addons for local assets ...
I0913 23:27:40.088164 17442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5268/.minikube/files for local assets ...
I0913 23:27:40.088186 17442 start.go:296] duration metric: took 10.835277ms for postStartSetup
I0913 23:27:40.088775 17442 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/config.json ...
I0913 23:27:40.088900 17442 start.go:128] duration metric: took 17.405109ms to createHost
I0913 23:27:40.088911 17442 start.go:83] releasing machines lock for "minikube", held for 17.565934ms
I0913 23:27:40.089278 17442 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0913 23:27:40.089369 17442 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0913 23:27:40.091967 17442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0913 23:27:40.092008 17442 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0913 23:27:40.101098 17442 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0913 23:27:40.101140 17442 start.go:495] detecting cgroup driver to use...
I0913 23:27:40.101173 17442 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0913 23:27:40.101295 17442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0913 23:27:40.117356 17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0913 23:27:40.126064 17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0913 23:27:40.133958 17442 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0913 23:27:40.134002 17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0913 23:27:40.143256 17442 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0913 23:27:40.152284 17442 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0913 23:27:40.160537 17442 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0913 23:27:40.169471 17442 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0913 23:27:40.177164 17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0913 23:27:40.185289 17442 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0913 23:27:40.193354 17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0913 23:27:40.202659 17442 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0913 23:27:40.209921 17442 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0913 23:27:40.217004 17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 23:27:40.430326 17442 exec_runner.go:51] Run: sudo systemctl restart containerd
I0913 23:27:40.494583 17442 start.go:495] detecting cgroup driver to use...
I0913 23:27:40.494625 17442 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0913 23:27:40.494724 17442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0913 23:27:40.513873 17442 exec_runner.go:51] Run: which cri-dockerd
I0913 23:27:40.514771 17442 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0913 23:27:40.521993 17442 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0913 23:27:40.522012 17442 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0913 23:27:40.522048 17442 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0913 23:27:40.529615 17442 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0913 23:27:40.529746 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2101396467 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0913 23:27:40.537035 17442 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0913 23:27:40.764653 17442 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0913 23:27:40.982299 17442 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0913 23:27:40.982472 17442 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0913 23:27:40.982488 17442 exec_runner.go:203] rm: /etc/docker/daemon.json
I0913 23:27:40.982531 17442 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0913 23:27:40.990782 17442 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0913 23:27:40.990911 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2963141349 /etc/docker/daemon.json
I0913 23:27:40.998434 17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 23:27:41.234618 17442 exec_runner.go:51] Run: sudo systemctl restart docker
I0913 23:27:41.526286 17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0913 23:27:41.536959 17442 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0913 23:27:41.552568 17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0913 23:27:41.563087 17442 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0913 23:27:41.802535 17442 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0913 23:27:42.022296 17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 23:27:42.254992 17442 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0913 23:27:42.268385 17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0913 23:27:42.278461 17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 23:27:42.500889 17442 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0913 23:27:42.567208 17442 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0913 23:27:42.567285 17442 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0913 23:27:42.568620 17442 start.go:563] Will wait 60s for crictl version
I0913 23:27:42.568667 17442 exec_runner.go:51] Run: which crictl
I0913 23:27:42.569465 17442 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0913 23:27:42.598370 17442 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0913 23:27:42.598423 17442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0913 23:27:42.618154 17442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0913 23:27:42.639258 17442 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0913 23:27:42.639338 17442 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0913 23:27:42.641934 17442 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0913 23:27:42.642988 17442 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0913 23:27:42.643093 17442 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0913 23:27:42.643103 17442 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
I0913 23:27:42.643187 17442 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0913 23:27:42.643224 17442 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0913 23:27:42.691163 17442 cni.go:84] Creating CNI manager for ""
I0913 23:27:42.691189 17442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0913 23:27:42.691198 17442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0913 23:27:42.691217 17442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0913 23:27:42.691358 17442 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.138.0.48
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-2"
kubeletExtraArgs:
node-ip: 10.138.0.48
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
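The rendered config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new and later copied over /var/tmp/minikube/kubeadm.yaml. As a sketch for triaging init problems offline, assuming the staged v1.31.1 kubeadm binary supports the validate subcommand (present in recent kubeadm releases; treat its availability here as an assumption):

    # Check the generated file against the kubeadm config schema without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new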
I0913 23:27:42.691410 17442 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0913 23:27:42.699298 17442 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0913 23:27:42.699342 17442 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0913 23:27:42.707274 17442 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0913 23:27:42.707274 17442 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0913 23:27:42.707335 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0913 23:27:42.707367 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0913 23:27:42.707306 17442 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0913 23:27:42.707447 17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0913 23:27:42.718092 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0913 23:27:42.758021 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3450046635 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0913 23:27:42.764538 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1427354373 /var/lib/minikube/binaries/v1.31.1/kubectl
I0913 23:27:42.791232 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3399511020 /var/lib/minikube/binaries/v1.31.1/kubelet
I0913 23:27:42.854756 17442 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0913 23:27:42.863665 17442 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0913 23:27:42.863684 17442 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0913 23:27:42.863728 17442 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0913 23:27:42.870791 17442 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
I0913 23:27:42.870961 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube944603868 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0913 23:27:42.878197 17442 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0913 23:27:42.878213 17442 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0913 23:27:42.878241 17442 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0913 23:27:42.885374 17442 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0913 23:27:42.885482 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1796042159 /lib/systemd/system/kubelet.service
I0913 23:27:42.892519 17442 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0913 23:27:42.892609 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube593589632 /var/tmp/minikube/kubeadm.yaml.new
I0913 23:27:42.899693 17442 exec_runner.go:51] Run: grep 10.138.0.48 control-plane.minikube.internal$ /etc/hosts
I0913 23:27:42.900872 17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 23:27:43.132492 17442 exec_runner.go:51] Run: sudo systemctl start kubelet
I0913 23:27:43.147234 17442 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube for IP: 10.138.0.48
I0913 23:27:43.147257 17442 certs.go:194] generating shared ca certs ...
I0913 23:27:43.147273 17442 certs.go:226] acquiring lock for ca certs: {Name:mkaa457a5d672eb517f9ab47f243967501ec8a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:43.147407 17442 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5268/.minikube/ca.key
I0913 23:27:43.147446 17442 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5268/.minikube/proxy-client-ca.key
I0913 23:27:43.147455 17442 certs.go:256] generating profile certs ...
I0913 23:27:43.147502 17442 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.key
I0913 23:27:43.147520 17442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.crt with IP's: []
I0913 23:27:43.256320 17442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.crt ...
I0913 23:27:43.256352 17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.crt: {Name:mkaa9242556f51dab016296e8ae9815dcc02adf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:43.256488 17442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.key ...
I0913 23:27:43.256499 17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.key: {Name:mk35908491563b31a3f070a6b55036665c26a19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:43.256559 17442 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key.35c0634a
I0913 23:27:43.256573 17442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
I0913 23:27:43.508037 17442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
I0913 23:27:43.508065 17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk4d1294ccf6a0b4ab4193fa085c9a0d57eb998f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:43.508187 17442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key.35c0634a ...
I0913 23:27:43.508197 17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk3ab7673b84dc98fab10aedb75cee3ee57a3d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:43.508247 17442 certs.go:381] copying /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt
I0913 23:27:43.508388 17442 certs.go:385] copying /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key
I0913 23:27:43.508447 17442 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.key
I0913 23:27:43.508461 17442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0913 23:27:43.639801 17442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.crt ...
I0913 23:27:43.639830 17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.crt: {Name:mk7c623c898122472e947fba33dd326e8ea2112d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:43.639957 17442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.key ...
I0913 23:27:43.639968 17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.key: {Name:mkc79bbfb230eb24f3e64926e49c71bfab00a0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:43.640117 17442 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5268/.minikube/certs/ca-key.pem (1675 bytes)
I0913 23:27:43.640148 17442 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5268/.minikube/certs/ca.pem (1078 bytes)
I0913 23:27:43.640170 17442 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5268/.minikube/certs/cert.pem (1123 bytes)
I0913 23:27:43.640194 17442 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5268/.minikube/certs/key.pem (1675 bytes)
I0913 23:27:43.640796 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0913 23:27:43.640904 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1411481762 /var/lib/minikube/certs/ca.crt
I0913 23:27:43.650220 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0913 23:27:43.650334 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2872038555 /var/lib/minikube/certs/ca.key
I0913 23:27:43.657763 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0913 23:27:43.657860 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4040138273 /var/lib/minikube/certs/proxy-client-ca.crt
I0913 23:27:43.665078 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0913 23:27:43.665170 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube743613709 /var/lib/minikube/certs/proxy-client-ca.key
I0913 23:27:43.672503 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0913 23:27:43.672614 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube841865764 /var/lib/minikube/certs/apiserver.crt
I0913 23:27:43.679833 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0913 23:27:43.679931 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube570108513 /var/lib/minikube/certs/apiserver.key
I0913 23:27:43.687396 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0913 23:27:43.687610 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube407869226 /var/lib/minikube/certs/proxy-client.crt
I0913 23:27:43.695364 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0913 23:27:43.695481 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2207609115 /var/lib/minikube/certs/proxy-client.key
I0913 23:27:43.703373 17442 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0913 23:27:43.703390 17442 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0913 23:27:43.703418 17442 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0913 23:27:43.710578 17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0913 23:27:43.710696 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1558285643 /usr/share/ca-certificates/minikubeCA.pem
I0913 23:27:43.718576 17442 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0913 23:27:43.718677 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube564877468 /var/lib/minikube/kubeconfig
I0913 23:27:43.725997 17442 exec_runner.go:51] Run: openssl version
I0913 23:27:43.728874 17442 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0913 23:27:43.736770 17442 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0913 23:27:43.737944 17442 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
I0913 23:27:43.737981 17442 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0913 23:27:43.740633 17442 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0913 23:27:43.748077 17442 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0913 23:27:43.749145 17442 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0913 23:27:43.749176 17442 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0913 23:27:43.749263 17442 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0913 23:27:43.764071 17442 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0913 23:27:43.771942 17442 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0913 23:27:43.779598 17442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0913 23:27:43.797464 17442 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0913 23:27:43.804972 17442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0913 23:27:43.804989 17442 kubeadm.go:157] found existing configuration files:
I0913 23:27:43.805030 17442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0913 23:27:43.812356 17442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0913 23:27:43.812400 17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0913 23:27:43.819325 17442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0913 23:27:43.826625 17442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0913 23:27:43.826667 17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0913 23:27:43.833477 17442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0913 23:27:43.840625 17442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0913 23:27:43.840664 17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0913 23:27:43.847528 17442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0913 23:27:43.854259 17442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0913 23:27:43.854294 17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0913 23:27:43.860693 17442 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0913 23:27:43.891565 17442 kubeadm.go:310] W0913 23:27:43.891452 18319 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0913 23:27:43.892159 17442 kubeadm.go:310] W0913 23:27:43.892110 18319 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
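Both deprecation warnings point at the same fix, which kubeadm itself quotes. A sketch of applying it to the file this run generated (binary and config paths are taken from the log above; the output filename is illustrative):

    # Rewrite the deprecated v1beta3 spec as the newer API version kubeadm suggests.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml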
I0913 23:27:43.893675 17442 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0913 23:27:43.893728 17442 kubeadm.go:310] [preflight] Running pre-flight checks
I0913 23:27:43.992568 17442 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0913 23:27:43.992608 17442 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0913 23:27:43.992613 17442 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0913 23:27:43.992618 17442 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0913 23:27:44.003644 17442 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0913 23:27:44.007235 17442 out.go:235] - Generating certificates and keys ...
I0913 23:27:44.007278 17442 kubeadm.go:310] [certs] Using existing ca certificate authority
I0913 23:27:44.007292 17442 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0913 23:27:44.075670 17442 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0913 23:27:44.303936 17442 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0913 23:27:44.498053 17442 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0913 23:27:44.600385 17442 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0913 23:27:44.743923 17442 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0913 23:27:44.744037 17442 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0913 23:27:45.081650 17442 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0913 23:27:45.081790 17442 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0913 23:27:45.206908 17442 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0913 23:27:45.357294 17442 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0913 23:27:45.545262 17442 kubeadm.go:310] [certs] Generating "sa" key and public key
I0913 23:27:45.545446 17442 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0913 23:27:45.610682 17442 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0913 23:27:46.117404 17442 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0913 23:27:46.441754 17442 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0913 23:27:46.568612 17442 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0913 23:27:46.735805 17442 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0913 23:27:46.736389 17442 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0913 23:27:46.739735 17442 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0913 23:27:46.741773 17442 out.go:235] - Booting up control plane ...
I0913 23:27:46.741794 17442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0913 23:27:46.741813 17442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0913 23:27:46.741820 17442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0913 23:27:46.762007 17442 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0913 23:27:46.766143 17442 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0913 23:27:46.766204 17442 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0913 23:27:46.998267 17442 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0913 23:27:46.998288 17442 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0913 23:27:47.499821 17442 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.52557ms
I0913 23:27:47.499844 17442 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0913 23:27:51.501297 17442 kubeadm.go:310] [api-check] The API server is healthy after 4.001474093s
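Both probes above are plain HTTP GETs against endpoints printed in this log, so the same checks can be repeated by hand (a sketch; the ports are taken from the kubelet-check line above and from the apiserver health checks further down in this log):
  curl -sS http://127.0.0.1:10248/healthz      # kubelet health endpoint polled by kubeadm above
  curl -sSk https://10.138.0.48:8443/healthz   # API server health endpoint checked later in this log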
I0913 23:27:51.511992 17442 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0913 23:27:51.520723 17442 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0913 23:27:51.536233 17442 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0913 23:27:51.536255 17442 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0913 23:27:51.542357 17442 kubeadm.go:310] [bootstrap-token] Using token: k48vab.7dyg6bcsudjwhanl
I0913 23:27:51.543547 17442 out.go:235] - Configuring RBAC rules ...
I0913 23:27:51.543573 17442 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0913 23:27:51.546266 17442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0913 23:27:51.551401 17442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0913 23:27:51.553684 17442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0913 23:27:51.555858 17442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0913 23:27:51.557926 17442 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0913 23:27:51.908217 17442 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0913 23:27:52.330049 17442 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0913 23:27:52.907307 17442 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0913 23:27:52.908199 17442 kubeadm.go:310]
I0913 23:27:52.908211 17442 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0913 23:27:52.908215 17442 kubeadm.go:310]
I0913 23:27:52.908219 17442 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0913 23:27:52.908223 17442 kubeadm.go:310]
I0913 23:27:52.908228 17442 kubeadm.go:310] mkdir -p $HOME/.kube
I0913 23:27:52.908232 17442 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0913 23:27:52.908236 17442 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0913 23:27:52.908239 17442 kubeadm.go:310]
I0913 23:27:52.908242 17442 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0913 23:27:52.908246 17442 kubeadm.go:310]
I0913 23:27:52.908249 17442 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0913 23:27:52.908253 17442 kubeadm.go:310]
I0913 23:27:52.908256 17442 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0913 23:27:52.908260 17442 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0913 23:27:52.908264 17442 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0913 23:27:52.908273 17442 kubeadm.go:310]
I0913 23:27:52.908277 17442 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0913 23:27:52.908281 17442 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0913 23:27:52.908284 17442 kubeadm.go:310]
I0913 23:27:52.908288 17442 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k48vab.7dyg6bcsudjwhanl \
I0913 23:27:52.908292 17442 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a1f628722583726a6fb852629b3c574616ae97686c64770f04122966ec038809 \
I0913 23:27:52.908296 17442 kubeadm.go:310] --control-plane
I0913 23:27:52.908300 17442 kubeadm.go:310]
I0913 23:27:52.908304 17442 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0913 23:27:52.908307 17442 kubeadm.go:310]
I0913 23:27:52.908311 17442 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k48vab.7dyg6bcsudjwhanl \
I0913 23:27:52.908316 17442 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a1f628722583726a6fb852629b3c574616ae97686c64770f04122966ec038809
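For reference, the --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA's public key. Since this run used certificateDir /var/lib/minikube/certs (see the [certs] phase above), it can be recomputed on the control plane with the standard pipeline from the kubeadm docs:
  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'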
I0913 23:27:52.911005 17442 cni.go:84] Creating CNI manager for ""
I0913 23:27:52.911037 17442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0913 23:27:52.912748 17442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0913 23:27:52.913883 17442 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0913 23:27:52.923598 17442 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0913 23:27:52.923732 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube874528224 /etc/cni/net.d/1-k8s.conflist
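The 496-byte conflist written above is minikube's bridge CNI config. Its exact contents are not reproduced in this log, but a representative bridge conflist (illustrative only; the plugin settings and subnet here are assumptions, not the actual file) looks like:
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
        "ipMasq": true, "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }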
I0913 23:27:52.932495 17442 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0913 23:27:52.932567 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:52.932615 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_13T23_27_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0913 23:27:52.941587 17442 ops.go:34] apiserver oom_adj: -16
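The -16 read back above is the legacy OOM score adjustment of the apiserver process; assuming the same pgrep pattern used elsewhere in this log, it can be inspected directly:
  sudo cat /proc/$(pgrep -xnf 'kube-apiserver.*minikube.*')/oom_adj   # -16: less likely to be OOM-killed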
I0913 23:27:53.009149 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:53.510027 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:54.009720 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:54.509811 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:55.009502 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:55.510251 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:56.010033 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:56.510046 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:57.009839 17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0913 23:27:57.071893 17442 kubeadm.go:1113] duration metric: took 4.139377916s to wait for elevateKubeSystemPrivileges
I0913 23:27:57.071932 17442 kubeadm.go:394] duration metric: took 13.322756725s to StartCluster
I0913 23:27:57.071956 17442 settings.go:142] acquiring lock: {Name:mk426e1dc1c608a4bb98bf9e7b79b1ed45a6c0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:57.072030 17442 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19640-5268/kubeconfig
I0913 23:27:57.072657 17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/kubeconfig: {Name:mk1b3a366c7b53fc601c2b9d45908b04288a17e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0913 23:27:57.072898 17442 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0913 23:27:57.072886 17442 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0913 23:27:57.073016 17442 addons.go:69] Setting yakd=true in profile "minikube"
I0913 23:27:57.073024 17442 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0913 23:27:57.073036 17442 addons.go:234] Setting addon yakd=true in "minikube"
I0913 23:27:57.073045 17442 mustload.go:65] Loading cluster: minikube
I0913 23:27:57.073070 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.073087 17442 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0913 23:27:57.073081 17442 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0913 23:27:57.073105 17442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0913 23:27:57.073120 17442 addons.go:69] Setting registry=true in profile "minikube"
I0913 23:27:57.073123 17442 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:27:57.073130 17442 addons.go:234] Setting addon registry=true in "minikube"
I0913 23:27:57.073163 17442 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0913 23:27:57.073168 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.073179 17442 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0913 23:27:57.073248 17442 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:27:57.073640 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.073657 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.073682 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.073697 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.073708 17442 addons.go:69] Setting helm-tiller=true in profile "minikube"
I0913 23:27:57.073710 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.073719 17442 addons.go:234] Setting addon helm-tiller=true in "minikube"
I0913 23:27:57.073725 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.073736 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.073740 17442 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0913 23:27:57.073761 17442 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0913 23:27:57.073766 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.073774 17442 addons.go:69] Setting volcano=true in profile "minikube"
I0913 23:27:57.073786 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.073792 17442 addons.go:234] Setting addon volcano=true in "minikube"
I0913 23:27:57.073815 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.074243 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.074262 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.074294 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.073699 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.074365 17442 out.go:177] * Configuring local host environment ...
I0913 23:27:57.073762 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.074415 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.074421 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.074451 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.074502 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.074515 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.074545 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.073107 17442 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0913 23:27:57.074684 17442 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0913 23:27:57.074679 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.074702 17442 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0913 23:27:57.074726 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.075333 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.075340 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.075348 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.075353 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.075376 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.075383 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.074363 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.075944 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.075977 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0913 23:27:57.076988 17442 out.go:270] *
W0913 23:27:57.077012 17442 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0913 23:27:57.077020 17442 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0913 23:27:57.077027 17442 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0913 23:27:57.077074 17442 out.go:270] *
W0913 23:27:57.077137 17442 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0913 23:27:57.077192 17442 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0913 23:27:57.077220 17442 out.go:270] *
W0913 23:27:57.077267 17442 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0913 23:27:57.077306 17442 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0913 23:27:57.077337 17442 out.go:270] *
W0913 23:27:57.077364 17442 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
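As the warnings suggest, the manual chown/mv steps can be avoided by exporting the variable before starting. A minimal sketch for a 'none'-driver run like this one (sudo -E is an assumption here, used to preserve the variable for the root invocation):
  export CHANGE_MINIKUBE_NONE_USER=true
  sudo -E minikube start --driver=none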
I0913 23:27:57.077419 17442 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0913 23:27:57.073727 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.077493 17442 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0913 23:27:57.077533 17442 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0913 23:27:57.077566 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.074390 17442 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0913 23:27:57.077750 17442 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0913 23:27:57.077782 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.078238 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.078283 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.078307 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.078326 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.078334 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.078363 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.074379 17442 addons.go:69] Setting metrics-server=true in profile "minikube"
I0913 23:27:57.078551 17442 addons.go:234] Setting addon metrics-server=true in "minikube"
I0913 23:27:57.078583 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.078516 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.078731 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.073063 17442 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0913 23:27:57.078814 17442 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0913 23:27:57.078858 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.079164 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.079188 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.079219 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.079487 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.079540 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.079581 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.080453 17442 out.go:177] * Verifying Kubernetes components...
I0913 23:27:57.084435 17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0913 23:27:57.097582 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.098468 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.098508 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.098470 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.102971 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.103263 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.106329 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.110192 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.113567 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.116966 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.116978 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.117726 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.119309 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.125740 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.125801 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.129543 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.129605 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.131088 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.131135 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.132646 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.132690 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.134792 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.134845 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.135471 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.135525 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.135731 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.135928 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.137670 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.137722 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.138977 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.139024 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.139279 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.139322 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.142607 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.144979 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.145025 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.149168 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.149234 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.157479 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.157505 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.158036 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.158162 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.158858 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.158879 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.161849 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.161922 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.166641 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.166669 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.170179 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.170211 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.170249 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.170261 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.170274 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.170286 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.170640 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
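Each interleaved apiserver check above follows the same pgrep -> cgroup -> freezer.state -> /healthz sequence; a minimal manual reproduction, assuming the process pattern and paths printed in this log:
  pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')                   # resolve the apiserver PID (18732 here)
  cg=$(sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3-)  # freezer cgroup path
  sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"                  # expect THAWED
  curl -sk https://10.138.0.48:8443/healthz                             # expect: ok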
I0913 23:27:57.170847 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.171745 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.171779 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.172294 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.172318 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.172610 17442 out.go:177] - Using image docker.io/registry:2.8.3
I0913 23:27:57.172772 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.172829 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.172776 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.172984 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.173122 17442 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0913 23:27:57.173162 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.173854 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.175651 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.175669 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.175693 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.176543 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.177208 17442 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0913 23:27:57.177551 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.178541 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.177624 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.179438 17442 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0913 23:27:57.179479 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.179510 17442 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0913 23:27:57.179616 17442 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0913 23:27:57.179643 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0913 23:27:57.179821 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1248433547 /etc/kubernetes/addons/helm-tiller-dp.yaml
I0913 23:27:57.179823 17442 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0913 23:27:57.180048 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:27:57.180058 17442 api_server.go:166] Checking apiserver status ...
I0913 23:27:57.180088 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:27:57.180243 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.180260 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:27:57.180777 17442 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0913 23:27:57.180814 17442 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0913 23:27:57.180871 17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0913 23:27:57.180894 17442 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0913 23:27:57.180937 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube32070201 /etc/kubernetes/addons/metrics-apiservice.yaml
I0913 23:27:57.181021 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube775660223 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0913 23:27:57.181544 17442 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0913 23:27:57.181571 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.181882 17442 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0913 23:27:57.182192 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.182542 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.182856 17442 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0913 23:27:57.182933 17442 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0913 23:27:57.183267 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0913 23:27:57.183371 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2192420989 /etc/kubernetes/addons/registry-rc.yaml
I0913 23:27:57.183695 17442 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0913 23:27:57.183708 17442 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0913 23:27:57.183714 17442 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0913 23:27:57.183745 17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0913 23:27:57.184237 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.184252 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.185429 17442 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0913 23:27:57.186716 17442 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0913 23:27:57.188209 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.188435 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.188455 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.194904 17442 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0913 23:27:57.195461 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.195641 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.195935 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.195952 17442 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0913 23:27:57.195976 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0913 23:27:57.196536 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4024711363 /etc/kubernetes/addons/volcano-deployment.yaml
I0913 23:27:57.197487 17442 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0913 23:27:57.197545 17442 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0913 23:27:57.200999 17442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0913 23:27:57.197647 17442 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0913 23:27:57.202252 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0913 23:27:57.202412 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0913 23:27:57.203221 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3776574104 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0913 23:27:57.203450 17442 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0913 23:27:57.203601 17442 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0913 23:27:57.203625 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0913 23:27:57.203720 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2775994425 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0913 23:27:57.203724 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube186619044 /etc/kubernetes/addons/deployment.yaml
I0913 23:27:57.203940 17442 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0913 23:27:57.203959 17442 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0913 23:27:57.204060 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1844513379 /etc/kubernetes/addons/registry-svc.yaml
I0913 23:27:57.204688 17442 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0913 23:27:57.204778 17442 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0913 23:27:57.205023 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3076031724 /etc/kubernetes/addons/yakd-ns.yaml
I0913 23:27:57.205215 17442 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0913 23:27:57.205241 17442 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0913 23:27:57.205725 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2279727050 /etc/kubernetes/addons/ig-namespace.yaml
I0913 23:27:57.205734 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.205755 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.206692 17442 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0913 23:27:57.206712 17442 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0913 23:27:57.206810 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3991240121 /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0913 23:27:57.214805 17442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0913 23:27:57.214832 17442 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0913 23:27:57.214947 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube929751967 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0913 23:27:57.215418 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.217130 17442 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0913 23:27:57.219708 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0913 23:27:57.220771 17442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0913 23:27:57.221166 17442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0913 23:27:57.221433 17442 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0913 23:27:57.222045 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube123773802 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0913 23:27:57.222236 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.225914 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0913 23:27:57.226260 17442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0913 23:27:57.227601 17442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0913 23:27:57.228900 17442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0913 23:27:57.230708 17442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0913 23:27:57.231721 17442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0913 23:27:57.234090 17442 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0913 23:27:57.236322 17442 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0913 23:27:57.236349 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0913 23:27:57.236517 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2985886460 /etc/kubernetes/addons/registry-proxy.yaml
I0913 23:27:57.236907 17442 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0913 23:27:57.236974 17442 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0913 23:27:57.237163 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1771172411 /etc/kubernetes/addons/yakd-sa.yaml
I0913 23:27:57.237926 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0913 23:27:57.237949 17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0913 23:27:57.237973 17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0913 23:27:57.238090 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1626312397 /etc/kubernetes/addons/storage-provisioner.yaml
I0913 23:27:57.238101 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1327141144 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0913 23:27:57.238816 17442 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0913 23:27:57.238842 17442 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0913 23:27:57.238947 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4135178880 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0913 23:27:57.240633 17442 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0913 23:27:57.242138 17442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0913 23:27:57.242161 17442 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0913 23:27:57.242283 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3184474248 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0913 23:27:57.243575 17442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0913 23:27:57.243598 17442 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0913 23:27:57.243601 17442 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0913 23:27:57.243619 17442 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0913 23:27:57.243744 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2414221854 /etc/kubernetes/addons/helm-tiller-svc.yaml
I0913 23:27:57.243992 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3262758196 /etc/kubernetes/addons/metrics-server-service.yaml
I0913 23:27:57.245951 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:27:57.246357 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0913 23:27:57.246458 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.246504 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.253668 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0913 23:27:57.254203 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0913 23:27:57.255119 17442 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0913 23:27:57.260937 17442 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0913 23:27:57.260990 17442 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0913 23:27:57.261006 17442 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0913 23:27:57.261116 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube751495581 /etc/kubernetes/addons/yakd-crb.yaml
I0913 23:27:57.261116 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube229584318 /etc/kubernetes/addons/ig-role.yaml
I0913 23:27:57.267971 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0913 23:27:57.268434 17442 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0913 23:27:57.268457 17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0913 23:27:57.268560 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3705089436 /etc/kubernetes/addons/rbac-hostpath.yaml
I0913 23:27:57.268693 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:27:57.268724 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:27:57.275120 17442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0913 23:27:57.275150 17442 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0913 23:27:57.275379 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube766263024 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0913 23:27:57.281748 17442 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0913 23:27:57.281783 17442 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0913 23:27:57.281906 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3060564583 /etc/kubernetes/addons/yakd-svc.yaml
I0913 23:27:57.286437 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0913 23:27:57.290240 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.290270 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.295227 17442 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0913 23:27:57.295258 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0913 23:27:57.295402 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube229927536 /etc/kubernetes/addons/yakd-dp.yaml
I0913 23:27:57.296224 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:27:57.296249 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:27:57.300333 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.300377 17442 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0913 23:27:57.300405 17442 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0913 23:27:57.300412 17442 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0913 23:27:57.300458 17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0913 23:27:57.301410 17442 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0913 23:27:57.301441 17442 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0913 23:27:57.301574 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube76485290 /etc/kubernetes/addons/ig-rolebinding.yaml
I0913 23:27:57.304376 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:27:57.306988 17442 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0913 23:27:57.308074 17442 out.go:177] - Using image docker.io/busybox:stable
I0913 23:27:57.309443 17442 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0913 23:27:57.309474 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0913 23:27:57.309664 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1904369519 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0913 23:27:57.317292 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0913 23:27:57.324054 17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0913 23:27:57.324081 17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0913 23:27:57.324206 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3194742819 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0913 23:27:57.325912 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0913 23:27:57.326591 17442 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0913 23:27:57.326613 17442 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0913 23:27:57.326716 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2293852135 /etc/kubernetes/addons/ig-clusterrole.yaml
I0913 23:27:57.327346 17442 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0913 23:27:57.327373 17442 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0913 23:27:57.327486 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3986840925 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0913 23:27:57.338418 17442 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0913 23:27:57.338449 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0913 23:27:57.338566 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1371611737 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0913 23:27:57.338795 17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0913 23:27:57.338818 17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0913 23:27:57.338908 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube613785979 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0913 23:27:57.351014 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0913 23:27:57.359395 17442 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0913 23:27:57.359425 17442 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0913 23:27:57.359540 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2815131364 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0913 23:27:57.360319 17442 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0913 23:27:57.360444 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube639427477 /etc/kubernetes/addons/storageclass.yaml
I0913 23:27:57.388178 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0913 23:27:57.412399 17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0913 23:27:57.412434 17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0913 23:27:57.412572 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3973050986 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0913 23:27:57.428040 17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0913 23:27:57.428077 17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0913 23:27:57.428197 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3028262753 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0913 23:27:57.434296 17442 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0913 23:27:57.434323 17442 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0913 23:27:57.434449 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube905477838 /etc/kubernetes/addons/ig-crd.yaml
I0913 23:27:57.464006 17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0913 23:27:57.464044 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0913 23:27:57.464220 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube544029514 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0913 23:27:57.502822 17442 exec_runner.go:51] Run: sudo systemctl start kubelet
I0913 23:27:57.502822 17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0913 23:27:57.502957 17442 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0913 23:27:57.503108 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1704774810 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0913 23:27:57.516500 17442 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0913 23:27:57.516533 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0913 23:27:57.518903 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3893659286 /etc/kubernetes/addons/ig-daemonset.yaml
I0913 23:27:57.566647 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0913 23:27:57.588510 17442 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
I0913 23:27:57.590959 17442 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
I0913 23:27:57.590982 17442 node_ready.go:38] duration metric: took 2.436908ms for node "ubuntu-20-agent-2" to be "Ready" ...
I0913 23:27:57.590994 17442 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0913 23:27:57.599038 17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0913 23:27:57.599075 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0913 23:27:57.599222 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube924088179 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0913 23:27:57.604688 17442 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0913 23:27:57.646801 17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0913 23:27:57.646835 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0913 23:27:57.646993 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2813958811 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0913 23:27:57.708518 17442 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0913 23:27:57.738945 17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0913 23:27:57.738992 17442 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0913 23:27:57.739141 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3590475104 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0913 23:27:57.865690 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0913 23:27:58.177489 17442 addons.go:475] Verifying addon registry=true in "minikube"
I0913 23:27:58.180638 17442 out.go:177] * Verifying registry addon...
I0913 23:27:58.183465 17442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0913 23:27:58.187246 17442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0913 23:27:58.187267 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:27:58.208909 17442 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0913 23:27:58.222182 17442 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0913 23:27:58.418624 17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.163562492s)
I0913 23:27:58.448000 17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.161507323s)
I0913 23:27:58.448035 17442 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0913 23:27:58.665832 17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.339874126s)
I0913 23:27:58.689887 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:27:58.706577 17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.139796241s)
I0913 23:27:59.195648 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:27:59.213372 17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.862281521s)
W0913 23:27:59.213487 17442 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0913 23:27:59.213685 17442 retry.go:31] will retry after 143.241383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
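The two failures above are the usual CRD apply-ordering race: all six manifests go to the API server in a single kubectl apply, so the csi-hostpath-snapclass VolumeSnapshotClass can be submitted before the volumesnapshotclasses CRD it depends on has been established, hence "ensure CRDs are installed first". minikube copes by retrying, re-applying with --force on the very next line. For reference, a minimal way to serialize the two steps by hand (a sketch reusing the manifest paths from this log, not what minikube itself runs):
$ kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
$ kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
$ kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml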
I0913 23:27:59.359243 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0913 23:27:59.618398 17442 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0913 23:27:59.692502 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:00.200887 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:00.258799 17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.032845056s)
I0913 23:28:00.590595 17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.724842158s)
I0913 23:28:00.590640 17442 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0913 23:28:00.606332 17442 out.go:177] * Verifying csi-hostpath-driver addon...
I0913 23:28:00.609194 17442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0913 23:28:00.613287 17442 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0913 23:28:00.613311 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:00.714064 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:01.114348 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:01.217763 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:01.613812 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:01.686868 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:02.110017 17442 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0913 23:28:02.112949 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:02.213113 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:02.553505 17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.194195078s)
I0913 23:28:02.614225 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:02.687289 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:03.113274 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:03.186707 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:03.613258 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:03.687268 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:04.110441 17442 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0913 23:28:04.113154 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:04.195685 17442 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0913 23:28:04.195864 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube804519332 /var/lib/minikube/google_application_credentials.json
I0913 23:28:04.206536 17442 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0913 23:28:04.206659 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2477774900 /var/lib/minikube/google_cloud_project
I0913 23:28:04.215433 17442 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0913 23:28:04.215480 17442 host.go:66] Checking if "minikube" exists ...
I0913 23:28:04.216022 17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0913 23:28:04.216041 17442 api_server.go:166] Checking apiserver status ...
I0913 23:28:04.216075 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:28:04.232537 17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
I0913 23:28:04.241371 17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
I0913 23:28:04.241423 17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
I0913 23:28:04.249675 17442 api_server.go:204] freezer state: "THAWED"
I0913 23:28:04.249700 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:28:04.317704 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:28:04.317780 17442 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0913 23:28:04.318829 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:04.393952 17442 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0913 23:28:04.471556 17442 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0913 23:28:04.534702 17442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0913 23:28:04.534771 17442 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0913 23:28:04.534919 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4106205132 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0913 23:28:04.544587 17442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0913 23:28:04.544616 17442 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0913 23:28:04.544719 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4264924740 /etc/kubernetes/addons/gcp-auth-service.yaml
I0913 23:28:04.553557 17442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0913 23:28:04.553586 17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0913 23:28:04.553700 17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2302772220 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0913 23:28:04.562301 17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0913 23:28:04.612989 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:04.687352 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:04.935042 17442 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0913 23:28:04.937730 17442 out.go:177] * Verifying gcp-auth addon...
I0913 23:28:04.940478 17442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0913 23:28:04.943465 17442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0913 23:28:05.113111 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:05.187234 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:05.609838 17442 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0913 23:28:05.609862 17442 pod_ready.go:82] duration metric: took 8.005138408s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.609873 17442 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.613140 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:05.614409 17442 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0913 23:28:05.614426 17442 pod_ready.go:82] duration metric: took 4.545618ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.614434 17442 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.618191 17442 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0913 23:28:05.618210 17442 pod_ready.go:82] duration metric: took 3.769259ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.618221 17442 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ccmtg" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.621999 17442 pod_ready.go:93] pod "kube-proxy-ccmtg" in "kube-system" namespace has status "Ready":"True"
I0913 23:28:05.622016 17442 pod_ready.go:82] duration metric: took 3.788187ms for pod "kube-proxy-ccmtg" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.622027 17442 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.626105 17442 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0913 23:28:05.626121 17442 pod_ready.go:82] duration metric: took 4.086761ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.626132 17442 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-v6s2b" in "kube-system" namespace to be "Ready" ...
I0913 23:28:05.713292 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:06.113698 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:06.186966 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:06.409150 17442 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-v6s2b" in "kube-system" namespace has status "Ready":"True"
I0913 23:28:06.409179 17442 pod_ready.go:82] duration metric: took 783.038714ms for pod "nvidia-device-plugin-daemonset-v6s2b" in "kube-system" namespace to be "Ready" ...
I0913 23:28:06.409188 17442 pod_ready.go:39] duration metric: took 8.818182714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0913 23:28:06.409213 17442 api_server.go:52] waiting for apiserver process to appear ...
I0913 23:28:06.409281 17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0913 23:28:06.429351 17442 api_server.go:72] duration metric: took 9.351869727s to wait for apiserver process to appear ...
I0913 23:28:06.429379 17442 api_server.go:88] waiting for apiserver healthz status ...
I0913 23:28:06.429402 17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0913 23:28:06.433500 17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0913 23:28:06.434460 17442 api_server.go:141] control plane version: v1.31.1
I0913 23:28:06.434485 17442 api_server.go:131] duration metric: took 5.098058ms to wait for apiserver health ...
I0913 23:28:06.434506 17442 system_pods.go:43] waiting for kube-system pods to appear ...
I0913 23:28:06.613454 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:06.615437 17442 system_pods.go:59] 17 kube-system pods found
I0913 23:28:06.615461 17442 system_pods.go:61] "coredns-7c65d6cfc9-khsrk" [f4523de2-fd1a-4429-8bb0-593cdcebc8d3] Running
I0913 23:28:06.615469 17442 system_pods.go:61] "csi-hostpath-attacher-0" [23867dcc-737c-47e4-b93d-ae6177be3088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0913 23:28:06.615474 17442 system_pods.go:61] "csi-hostpath-resizer-0" [9bacd521-508a-4d14-be54-d8ef696790e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0913 23:28:06.615484 17442 system_pods.go:61] "csi-hostpathplugin-qh7d2" [d24f9989-c47a-4568-81d6-b463704b2bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0913 23:28:06.615488 17442 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6ca55116-9e5f-4b31-a5d1-22ca33473230] Running
I0913 23:28:06.615492 17442 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [11e78520-bf53-4063-b809-908d3527afdf] Running
I0913 23:28:06.615495 17442 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [adc882dd-b519-4eea-a197-4f596b408017] Running
I0913 23:28:06.615498 17442 system_pods.go:61] "kube-proxy-ccmtg" [73fdfb4e-c926-4ccd-b35e-5df2208acfb3] Running
I0913 23:28:06.615501 17442 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [56c0780d-d761-48fe-a15f-1b1113e80709] Running
I0913 23:28:06.615505 17442 system_pods.go:61] "metrics-server-84c5f94fbc-trf62" [f9b8c6b4-aeff-4750-9bb3-7e3953c5258c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0913 23:28:06.615510 17442 system_pods.go:61] "nvidia-device-plugin-daemonset-v6s2b" [44534f93-107d-44e4-a5f2-7fd26e600251] Running
I0913 23:28:06.615515 17442 system_pods.go:61] "registry-66c9cd494c-rctqp" [5e62917f-fbaf-47a4-ab23-4c40518c66e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0913 23:28:06.615520 17442 system_pods.go:61] "registry-proxy-tcgsk" [2c97d680-312e-4178-b7a5-ec0b4dacb6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0913 23:28:06.615532 17442 system_pods.go:61] "snapshot-controller-56fcc65765-2ch4n" [7ff7b1c1-f704-4f2d-b60a-a94b57b31f28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0913 23:28:06.615540 17442 system_pods.go:61] "snapshot-controller-56fcc65765-zkp5d" [a54db17e-a825-43cb-99e5-5344c179faa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0913 23:28:06.615546 17442 system_pods.go:61] "storage-provisioner" [26a3ab50-0a6c-4ffd-a198-a6437b518845] Running
I0913 23:28:06.615554 17442 system_pods.go:61] "tiller-deploy-b48cc5f79-4688p" [baecb9b8-e7ad-4c4a-a256-fb125074d61b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0913 23:28:06.615561 17442 system_pods.go:74] duration metric: took 181.045208ms to wait for pod list to return data ...
I0913 23:28:06.615570 17442 default_sa.go:34] waiting for default service account to be created ...
I0913 23:28:06.687543 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:06.809286 17442 default_sa.go:45] found service account: "default"
I0913 23:28:06.809312 17442 default_sa.go:55] duration metric: took 193.734262ms for default service account to be created ...
I0913 23:28:06.809322 17442 system_pods.go:116] waiting for k8s-apps to be running ...
I0913 23:28:07.014188 17442 system_pods.go:86] 17 kube-system pods found
I0913 23:28:07.014224 17442 system_pods.go:89] "coredns-7c65d6cfc9-khsrk" [f4523de2-fd1a-4429-8bb0-593cdcebc8d3] Running
I0913 23:28:07.014233 17442 system_pods.go:89] "csi-hostpath-attacher-0" [23867dcc-737c-47e4-b93d-ae6177be3088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0913 23:28:07.014239 17442 system_pods.go:89] "csi-hostpath-resizer-0" [9bacd521-508a-4d14-be54-d8ef696790e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0913 23:28:07.014246 17442 system_pods.go:89] "csi-hostpathplugin-qh7d2" [d24f9989-c47a-4568-81d6-b463704b2bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0913 23:28:07.014251 17442 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6ca55116-9e5f-4b31-a5d1-22ca33473230] Running
I0913 23:28:07.014256 17442 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [11e78520-bf53-4063-b809-908d3527afdf] Running
I0913 23:28:07.014261 17442 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [adc882dd-b519-4eea-a197-4f596b408017] Running
I0913 23:28:07.014265 17442 system_pods.go:89] "kube-proxy-ccmtg" [73fdfb4e-c926-4ccd-b35e-5df2208acfb3] Running
I0913 23:28:07.014268 17442 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [56c0780d-d761-48fe-a15f-1b1113e80709] Running
I0913 23:28:07.014274 17442 system_pods.go:89] "metrics-server-84c5f94fbc-trf62" [f9b8c6b4-aeff-4750-9bb3-7e3953c5258c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0913 23:28:07.014280 17442 system_pods.go:89] "nvidia-device-plugin-daemonset-v6s2b" [44534f93-107d-44e4-a5f2-7fd26e600251] Running
I0913 23:28:07.014286 17442 system_pods.go:89] "registry-66c9cd494c-rctqp" [5e62917f-fbaf-47a4-ab23-4c40518c66e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0913 23:28:07.014294 17442 system_pods.go:89] "registry-proxy-tcgsk" [2c97d680-312e-4178-b7a5-ec0b4dacb6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0913 23:28:07.014301 17442 system_pods.go:89] "snapshot-controller-56fcc65765-2ch4n" [7ff7b1c1-f704-4f2d-b60a-a94b57b31f28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0913 23:28:07.014307 17442 system_pods.go:89] "snapshot-controller-56fcc65765-zkp5d" [a54db17e-a825-43cb-99e5-5344c179faa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0913 23:28:07.014311 17442 system_pods.go:89] "storage-provisioner" [26a3ab50-0a6c-4ffd-a198-a6437b518845] Running
I0913 23:28:07.014323 17442 system_pods.go:89] "tiller-deploy-b48cc5f79-4688p" [baecb9b8-e7ad-4c4a-a256-fb125074d61b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0913 23:28:07.014330 17442 system_pods.go:126] duration metric: took 205.001192ms to wait for k8s-apps to be running ...
I0913 23:28:07.014342 17442 system_svc.go:44] waiting for kubelet service to be running ....
I0913 23:28:07.014392 17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0913 23:28:07.026453 17442 system_svc.go:56] duration metric: took 12.10027ms WaitForService to wait for kubelet
I0913 23:28:07.026484 17442 kubeadm.go:582] duration metric: took 9.94902379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0913 23:28:07.026508 17442 node_conditions.go:102] verifying NodePressure condition ...
I0913 23:28:07.114061 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:07.187042 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:07.209621 17442 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0913 23:28:07.209644 17442 node_conditions.go:123] node cpu capacity is 8
I0913 23:28:07.209655 17442 node_conditions.go:105] duration metric: took 183.142206ms to run NodePressure ...
I0913 23:28:07.209669 17442 start.go:241] waiting for startup goroutines ...
I0913 23:28:07.660930 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:07.686656 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:08.113918 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:08.186516 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:08.614408 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:08.687971 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:09.113705 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:09.187531 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:09.614251 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:09.687432 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:10.163034 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:10.187003 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:10.613039 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:10.687212 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:11.113325 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:11.187369 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:11.614044 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:11.687211 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:12.114681 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:12.188063 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:12.614060 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:12.687342 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:13.113893 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:13.187015 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:13.614623 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:13.714370 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:14.113532 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:14.186792 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:14.614484 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:14.713667 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:15.113470 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:15.186451 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:15.614690 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:15.687660 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:16.113732 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:16.186428 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:16.614450 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:16.687557 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:17.114196 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:17.186555 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:17.613522 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:17.687089 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:18.114931 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:18.187122 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:18.613753 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:18.686964 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:19.115012 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:19.187641 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:19.613471 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:19.687737 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:20.113715 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:20.186952 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:20.612958 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:20.686835 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0913 23:28:21.113655 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:21.186861 17442 kapi.go:107] duration metric: took 23.003383575s to wait for kubernetes.io/minikube-addons=registry ...
I0913 23:28:21.613945 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:22.113866 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:22.613881 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:23.113883 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:23.613009 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:24.114085 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:24.614175 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:25.114159 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:25.669718 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:26.113432 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:26.613559 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:27.113299 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:27.614422 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:28.113230 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:28.614256 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:29.113804 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:29.615859 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:30.113486 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:30.613705 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:31.112928 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:31.614348 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:32.113810 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:32.613817 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:33.164616 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:33.614121 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:34.113629 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:34.613633 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:35.114205 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:35.614026 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:36.114365 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0913 23:28:36.613961 17442 kapi.go:107] duration metric: took 36.004765397s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0913 23:28:46.443782 17442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0913 23:28:46.443805 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:46.943966 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:47.444366 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:47.944472 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:48.443683 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:48.943622 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:49.443611 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:49.943632 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:50.443857 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:50.943448 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:51.443431 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:51.943124 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:52.444424 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:52.944439 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:53.444624 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:53.944516 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:54.443056 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:54.944192 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:55.443987 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:55.943466 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:56.443747 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:56.943491 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:57.443402 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:57.943845 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:58.443980 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:58.943378 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:59.443272 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:28:59.944211 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:00.443585 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:00.943540 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:01.443218 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:01.944328 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:02.445059 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:02.943476 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:03.443803 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:03.943718 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:04.443449 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:04.943279 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:05.443089 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:05.943855 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:06.443994 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:06.943369 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:07.443174 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:07.943654 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:08.444871 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:08.945060 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:09.443998 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:09.943913 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:10.443414 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:10.943980 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:11.443800 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:11.944034 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:12.443952 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:12.943514 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:13.443554 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:13.943267 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:14.444447 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:14.943233 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:15.444177 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:15.944365 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:16.444555 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:16.943186 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:17.443040 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:17.943526 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:18.444689 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:18.943399 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:19.444695 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:19.943372 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:20.444267 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:20.944044 17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0913 23:29:21.443962 17442 kapi.go:107] duration metric: took 1m16.503481915s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0913 23:29:21.445571 17442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0913 23:29:21.446718 17442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0913 23:29:21.447751 17442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0913 23:29:21.449027 17442 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, helm-tiller, yakd, storage-provisioner, metrics-server, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0913 23:29:21.450111 17442 addons.go:510] duration metric: took 1m24.377211463s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass helm-tiller yakd storage-provisioner metrics-server storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0913 23:29:21.450148 17442 start.go:246] waiting for cluster config update ...
I0913 23:29:21.450165 17442 start.go:255] writing updated cluster config ...
I0913 23:29:21.450599 17442 exec_runner.go:51] Run: rm -f paused
I0913 23:29:21.493240 17442 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
I0913 23:29:21.494882 17442 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
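The gcp-auth tips printed at 23:29:21 mention the `gcp-auth-skip-secret` label for opting a pod out of credential injection. A minimal sketch of launching such a pod (the pod name and image here are placeholders, not from this run):
$ kubectl --context minikube run no-gcp-creds --restart=Never --labels=gcp-auth-skip-secret=true --image=busybox -- sleep 300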
==> Docker <==
-- Logs begin at Wed 2024-07-31 19:05:20 UTC, end at Fri 2024-09-13 23:39:12 UTC. --
Sep 13 23:31:34 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:31:34.992217221Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 13 23:31:34 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:31:34.992241347Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 13 23:31:34 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:31:34.993734122Z" level=error msg="Error running exec e533f7a03195558123262731ef7964f0e29165461c449942a36827d1f30482cd in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 13 23:31:35 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:31:35.130622345Z" level=info msg="ignoring event" container=711372ea7579112482324d82bbc767aad2fef2ae5a068500dc91ba42e38c88e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 23:33:04 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:33:04.491123873Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 23:33:04 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:33:04.493465230Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 23:34:22 ubuntu-20-agent-2 cri-dockerd[17987]: time="2024-09-13T23:34:22Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 13 23:34:23 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:34:23.935673884Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 13 23:34:23 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:34:23.935669980Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 13 23:34:23 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:34:23.935911439Z" level=error msg="Error running exec 40ac74a1491c7b0b370cb6bcf953fe11636562f6624d8cd9aa345d65fd83f458 in container: cannot exec in a stopped state: unknown"
Sep 13 23:34:23 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:34:23.943122077Z" level=info msg="ignoring event" container=21d89e8c1a06a530e62ca2bdcd0d4cf1e12769ab17ae82358d960c8eec1fd249 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 23:35:45 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:35:45.484945738Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 23:35:45 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:35:45.487200517Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 13 23:38:11 ubuntu-20-agent-2 cri-dockerd[17987]: time="2024-09-13T23:38:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69e8a65e61ebca38d7f37818d06e359c55521b015cc4bf57250667a2368ba55/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 13 23:38:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:11.980178400Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 23:38:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:11.982337509Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 23:38:24 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:24.484186873Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 23:38:24 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:24.486265903Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 23:38:51 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:51.481120082Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 23:38:51 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:51.483461942Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.446426487Z" level=info msg="ignoring event" container=b69e8a65e61ebca38d7f37818d06e359c55521b015cc4bf57250667a2368ba55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.701274513Z" level=info msg="ignoring event" container=2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.764950687Z" level=info msg="ignoring event" container=e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.836799615Z" level=info msg="ignoring event" container=2a2488f1d912d046c9764b6f02a056816a6a02c93ec27e35b4420835e0bdd2a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.933189817Z" level=info msg="ignoring event" container=57b1ddfb7fe0df7119f5cbe758f89cde76790390c0558fb54b6c6cf09c9ea5ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
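[editor's note] Every gcr.io pull in this window fails with "unauthorized: authentication failed" rather than a 404, which points at bad credentials being presented, not a missing tag. Two hedged checks (assuming the gcp-auth addon attached its pull secret to the namespace's default service account):
  # anonymous pull of the same tag from the host, bypassing any injected pull secret
  docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
  # list the pull secrets a pod in default would inherit
  kubectl --context minikube get serviceaccount default -n default -o jsonpath='{.imagePullSecrets}'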
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
21d89e8c1a06a ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 4 minutes ago Exited gadget 6 bad6eca043379 gadget-sdn5b
827f05ddb756c gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 6771525fc52a5 gcp-auth-89d5ffd79-bzwl8
3e9bc9c04de01 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 4a209c3f132b8 csi-hostpathplugin-qh7d2
542a7fb821151 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 4a209c3f132b8 csi-hostpathplugin-qh7d2
704bc394e1da0 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 4a209c3f132b8 csi-hostpathplugin-qh7d2
eea8d89601a01 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 4a209c3f132b8 csi-hostpathplugin-qh7d2
5981dcf86164d registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 4a209c3f132b8 csi-hostpathplugin-qh7d2
7b0dacdba10d7 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 42eed5f9ce08f csi-hostpath-attacher-0
11b667c711180 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 4a209c3f132b8 csi-hostpathplugin-qh7d2
76fc9ebfc91bc registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 4cf106afd0e45 csi-hostpath-resizer-0
36193ea709ac9 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 3d5674af1b935 snapshot-controller-56fcc65765-zkp5d
05cb335070790 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 1e728a024c656 snapshot-controller-56fcc65765-2ch4n
c567be6a1b8a1 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 10 minutes ago Running local-path-provisioner 0 577d9f72f4259 local-path-provisioner-86d989889c-8cpks
300fe28326e07 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 10 minutes ago Running metrics-server 0 c127241428eee metrics-server-84c5f94fbc-trf62
1d02daefef76a ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f 10 minutes ago Running tiller 0 0fcb9e41ae791 tiller-deploy-b48cc5f79-4688p
532282de56302 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 ce4593d26e19e yakd-dashboard-67d98fc6b-sw9q9
af0d99f86d4a8 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 11 minutes ago Running cloud-spanner-emulator 0 01a3142769982 cloud-spanner-emulator-769b77f747-74qpc
4b6d0c6b881eb nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 ae82fe92e0a42 nvidia-device-plugin-daemonset-v6s2b
abb216077af8e 6e38f40d628db 11 minutes ago Running storage-provisioner 0 f4d912900c480 storage-provisioner
15e0a78ec98b2 c69fa2e9cbf5f 11 minutes ago Running coredns 0 662fe3b2cddb9 coredns-7c65d6cfc9-khsrk
019949511d0d9 60c005f310ff3 11 minutes ago Running kube-proxy 0 f7531a1d02e58 kube-proxy-ccmtg
d114732958c9f 9aa1fad941575 11 minutes ago Running kube-scheduler 0 15a502d6c5692 kube-scheduler-ubuntu-20-agent-2
107eb8f53a410 2e96e5913fc06 11 minutes ago Running etcd 0 519ac31b50293 etcd-ubuntu-20-agent-2
3250f8e27aaa1 6bab7719df100 11 minutes ago Running kube-apiserver 0 e0deb68e6886c kube-apiserver-ubuntu-20-agent-2
60a40e57ebb9b 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 ee6fb593cdabf kube-controller-manager-ubuntu-20-agent-2
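[editor's note] The gadget container above is Exited on attempt 6, i.e. crash-looping; the previous instance's output is usually the fastest lead (a sketch, assuming the pod is still named gadget-sdn5b):
  kubectl --context minikube logs -n gadget gadget-sdn5b --previous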
==> coredns [15e0a78ec98b] <==
[INFO] 10.244.0.9:44110 - 50929 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000098658s
[INFO] 10.244.0.9:53793 - 2613 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047901s
[INFO] 10.244.0.9:53793 - 44599 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066843s
[INFO] 10.244.0.9:43965 - 12697 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000071953s
[INFO] 10.244.0.9:43965 - 5014 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000187309s
[INFO] 10.244.0.9:51362 - 26271 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000065164s
[INFO] 10.244.0.9:51362 - 32668 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000108995s
[INFO] 10.244.0.9:46391 - 59099 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000061516s
[INFO] 10.244.0.9:46391 - 20902 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000093824s
[INFO] 10.244.0.9:42069 - 35515 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000051725s
[INFO] 10.244.0.9:42069 - 20925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010539s
[INFO] 10.244.0.23:59229 - 7472 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000254658s
[INFO] 10.244.0.23:47377 - 42980 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000190858s
[INFO] 10.244.0.23:57305 - 3750 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133876s
[INFO] 10.244.0.23:58058 - 63279 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179995s
[INFO] 10.244.0.23:40465 - 16597 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135813s
[INFO] 10.244.0.23:56245 - 26711 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157066s
[INFO] 10.244.0.23:59491 - 60528 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003573625s
[INFO] 10.244.0.23:41293 - 11840 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003614149s
[INFO] 10.244.0.23:46282 - 55483 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.001955201s
[INFO] 10.244.0.23:60194 - 63676 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003349848s
[INFO] 10.244.0.23:58776 - 38207 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.001689342s
[INFO] 10.244.0.23:59639 - 5780 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003140238s
[INFO] 10.244.0.23:41359 - 59953 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002283234s
[INFO] 10.244.0.23:37101 - 21190 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002453658s
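[editor's note] The runs of NXDOMAIN followed by a final NOERROR are the ndots:5 search-list walk from the pod's resolv.conf (the cri-dockerd rewrite of that file is visible in the Docker section above). It can be reproduced in-cluster; a sketch, using a Docker Hub busybox since the gcr.io tag was unpullable in this run:
  kubectl --context minikube run dns-probe --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local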
==> describe nodes <==
Name: ubuntu-20-agent-2
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-2
kubernetes.io/os=linux
minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_13T23_27_52_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-2
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 13 Sep 2024 23:27:49 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-2
AcquireTime: <unset>
RenewTime: Fri, 13 Sep 2024 23:39:06 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Fri, 13 Sep 2024 23:35:02 +0000   Fri, 13 Sep 2024 23:27:48 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Fri, 13 Sep 2024 23:35:02 +0000   Fri, 13 Sep 2024 23:27:48 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Fri, 13 Sep 2024 23:35:02 +0000   Fri, 13 Sep 2024 23:27:48 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Fri, 13 Sep 2024 23:35:02 +0000   Fri, 13 Sep 2024 23:27:50 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 10.138.0.48
Hostname: ubuntu-20-agent-2
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859308Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859308Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 1ec29a5c-5f40-e854-ccac-68a60c2524db
Boot ID: 5f8c688f-34c7-4408-80b6-0648374c9e56
Kernel Version: 5.15.0-1068-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (21 in total)
  Namespace           Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------           ----                                        ------------  ----------  ---------------  -------------  ---
  default             busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
  default             cloud-spanner-emulator-769b77f747-74qpc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  gadget              gadget-sdn5b                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  gcp-auth            gcp-auth-89d5ffd79-bzwl8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
  kube-system         coredns-7c65d6cfc9-khsrk                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
  kube-system         csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         csi-hostpathplugin-qh7d2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         etcd-ubuntu-20-agent-2                      100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
  kube-system         kube-apiserver-ubuntu-20-agent-2            250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         kube-controller-manager-ubuntu-20-agent-2  200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         kube-proxy-ccmtg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         kube-scheduler-ubuntu-20-agent-2            100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         metrics-server-84c5f94fbc-trf62             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
  kube-system         nvidia-device-plugin-daemonset-v6s2b        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         snapshot-controller-56fcc65765-2ch4n        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         snapshot-controller-56fcc65765-zkp5d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         tiller-deploy-b48cc5f79-4688p               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  local-path-storage  local-path-provisioner-86d989889c-8cpks     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  yakd-dashboard      yakd-dashboard-67d98fc6b-sw9q9              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  0 (0%)
  memory             498Mi (1%)  426Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age  From             Message
  ----     ------                   ---- ----             -------
  Normal   Starting                 11m  kube-proxy
  Normal   Starting                 11m  kubelet          Starting kubelet.
  Warning  CgroupV1                 11m  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
  Normal   NodeAllocatableEnforced  11m  kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  11m  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    11m  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     11m  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
  Normal   RegisteredNode           11m  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
==> dmesg <==
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 be 3f 53 15 e6 08 06
[ +1.104115] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a f2 cd 60 e7 38 08 06
[ +0.033569] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 0d 87 06 e9 fd 08 06
[ +2.547839] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 33 ff 72 38 65 08 06
[ +1.885595] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 55 f6 8a e7 2f 08 06
[ +1.825224] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 7b 0d 77 9b 66 08 06
[ +4.615348] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 a1 5f ac 9a 1a 08 06
[ +0.053690] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 4b fa bc 85 b0 08 06
[ +0.226597] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ea df 1e e6 98 4d 08 06
[Sep13 23:29] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e af 98 60 99 d6 08 06
[ +0.027122] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 02 73 62 ab ea 08 06
[ +11.178978] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff ea ff 80 b0 6e 3c 08 06
[ +0.000406] IPv4: martian source 10.244.0.23 from 10.244.0.4, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 69 96 0e 7e 4f 08 06
==> etcd [107eb8f53a41] <==
{"level":"info","ts":"2024-09-13T23:27:49.032846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
{"level":"info","ts":"2024-09-13T23:27:49.032857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
{"level":"info","ts":"2024-09-13T23:27:49.032863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-13T23:27:49.032872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
{"level":"info","ts":"2024-09-13T23:27:49.032881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-13T23:27:49.033825Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-13T23:27:49.033826Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-13T23:27:49.033825Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-13T23:27:49.033830Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-13T23:27:49.034120Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-13T23:27:49.034143Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-13T23:27:49.034519Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-13T23:27:49.034756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-13T23:27:49.034910Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-13T23:27:49.035084Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-13T23:27:49.035107Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-13T23:27:49.035922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
{"level":"info","ts":"2024-09-13T23:27:49.036273Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2024-09-13T23:28:25.565715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.099599ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2024-09-13T23:28:25.565770Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.677327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f4f17efe0a4382\" ","response":"range_response_count:1 size:927"}
{"level":"info","ts":"2024-09-13T23:28:25.565794Z","caller":"traceutil/trace.go:171","msg":"trace[1806587783] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"123.195982ms","start":"2024-09-13T23:28:25.442586Z","end":"2024-09-13T23:28:25.565782Z","steps":["trace[1806587783] 'range keys from in-memory index tree' (duration: 123.048641ms)"],"step_count":1}
{"level":"info","ts":"2024-09-13T23:28:25.565816Z","caller":"traceutil/trace.go:171","msg":"trace[1872658424] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f4f17efe0a4382; range_end:; response_count:1; response_revision:994; }","duration":"116.736972ms","start":"2024-09-13T23:28:25.449067Z","end":"2024-09-13T23:28:25.565804Z","steps":["trace[1872658424] 'range keys from in-memory index tree' (duration: 116.542816ms)"],"step_count":1}
{"level":"info","ts":"2024-09-13T23:37:49.051382Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1712}
{"level":"info","ts":"2024-09-13T23:37:49.075410Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1712,"took":"23.483501ms","hash":3552401535,"current-db-size-bytes":8245248,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4403200,"current-db-size-in-use":"4.4 MB"}
{"level":"info","ts":"2024-09-13T23:37:49.075453Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3552401535,"revision":1712,"compact-revision":-1}
==> gcp-auth [827f05ddb756] <==
2024/09/13 23:29:20 GCP Auth Webhook started!
2024/09/13 23:29:36 Ready to marshal response ...
2024/09/13 23:29:36 Ready to write response ...
2024/09/13 23:29:37 Ready to marshal response ...
2024/09/13 23:29:37 Ready to write response ...
2024/09/13 23:29:58 Ready to marshal response ...
2024/09/13 23:29:58 Ready to write response ...
2024/09/13 23:29:59 Ready to marshal response ...
2024/09/13 23:29:59 Ready to write response ...
2024/09/13 23:29:59 Ready to marshal response ...
2024/09/13 23:29:59 Ready to write response ...
2024/09/13 23:38:11 Ready to marshal response ...
2024/09/13 23:38:11 Ready to write response ...
==> kernel <==
23:39:12 up 21 min, 0 users, load average: 0.05, 0.30, 0.39
Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [3250f8e27aaa] <==
W0913 23:28:40.203029 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.53.178:443: connect: connection refused
W0913 23:28:45.931799 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.9.58:443: connect: connection refused
E0913 23:28:45.931834 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.9.58:443: connect: connection refused" logger="UnhandledError"
W0913 23:29:07.959667 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.9.58:443: connect: connection refused
E0913 23:29:07.959700 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.9.58:443: connect: connection refused" logger="UnhandledError"
W0913 23:29:07.968621 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.9.58:443: connect: connection refused
E0913 23:29:07.968662 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.9.58:443: connect: connection refused" logger="UnhandledError"
I0913 23:29:36.753711 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0913 23:29:36.770826 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0913 23:29:49.162201 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0913 23:29:49.171815 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0913 23:29:49.313660 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0913 23:29:49.315842 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0913 23:29:49.316348 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0913 23:29:49.367560 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0913 23:29:49.411053 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0913 23:29:49.443496 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0913 23:29:49.500516 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0913 23:29:50.188462 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0913 23:29:50.358264 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0913 23:29:50.368220 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0913 23:29:50.484790 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0913 23:29:50.500768 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0913 23:29:50.526533 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0913 23:29:50.681799 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [60a40e57ebb9] <==
W0913 23:37:45.422041 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:37:45.422087 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:37:56.053950 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:37:56.053992 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:02.083985 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:02.084025 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:16.131898 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:16.131944 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:20.386879 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:20.386917 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:22.557216 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:22.557259 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:27.736637 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:27.736677 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:44.636972 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:44.637024 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:50.376995 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:50.377046 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:53.487453 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:53.487491 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:38:56.027972 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:38:56.028014 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0913 23:39:03.361730 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0913 23:39:03.361772 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0913 23:39:11.668075 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.251µs"
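[editor's note] These repeating PartialObjectMetadata list failures look like the garbage collector's metadata informers still referencing resource types whose CRDs are gone; the apiserver section below shows the volcano.sh watchers being terminated at 23:29:50, consistent with that. A hedged confirmation:
  # expected empty if the volcano CRDs were removed when the addon was disabled
  kubectl --context minikube get crds | grep volcano.sh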
==> kube-proxy [019949511d0d] <==
I0913 23:27:59.081320 1 server_linux.go:66] "Using iptables proxy"
I0913 23:27:59.237625 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
E0913 23:27:59.237701 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0913 23:27:59.305437 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0913 23:27:59.305500 1 server_linux.go:169] "Using iptables Proxier"
I0913 23:27:59.309511 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0913 23:27:59.310208 1 server.go:483] "Version info" version="v1.31.1"
I0913 23:27:59.310825 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0913 23:27:59.312365 1 config.go:199] "Starting service config controller"
I0913 23:27:59.312391 1 shared_informer.go:313] Waiting for caches to sync for service config
I0913 23:27:59.312424 1 config.go:105] "Starting endpoint slice config controller"
I0913 23:27:59.312450 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0913 23:27:59.313285 1 config.go:328] "Starting node config controller"
I0913 23:27:59.313319 1 shared_informer.go:313] Waiting for caches to sync for node config
I0913 23:27:59.413518 1 shared_informer.go:320] Caches are synced for node config
I0913 23:27:59.413549 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0913 23:27:59.413569 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [d114732958c9] <==
W0913 23:27:49.904981 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0913 23:27:49.904996 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0913 23:27:49.905008 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
E0913 23:27:49.905010 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0913 23:27:49.905033 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0913 23:27:49.905054 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0913 23:27:49.905109 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0913 23:27:49.905135 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0913 23:27:50.739043 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0913 23:27:50.739090 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0913 23:27:50.801523 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0913 23:27:50.801579 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0913 23:27:50.813862 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0913 23:27:50.813895 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0913 23:27:50.828130 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0913 23:27:50.828159 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0913 23:27:50.960629 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0913 23:27:50.960677 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0913 23:27:50.962487 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0913 23:27:50.962526 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0913 23:27:51.024964 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0913 23:27:51.025009 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0913 23:27:51.086388 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0913 23:27:51.086428 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
I0913 23:27:53.103279 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Wed 2024-07-31 19:05:20 UTC, end at Fri 2024-09-13 23:39:12 UTC. --
Sep 13 23:39:04 ubuntu-20-agent-2 kubelet[18873]: E0913 23:39:04.340268 18873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0aafacff-c327-44f0-8b19-d7c6a9a05a27"
Sep 13 23:39:05 ubuntu-20-agent-2 kubelet[18873]: E0913 23:39:05.340024 18873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="fc311dc8-dddc-4bc6-b618-190017fbb792"
Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.637325 18873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc311dc8-dddc-4bc6-b618-190017fbb792-gcp-creds\") pod \"fc311dc8-dddc-4bc6-b618-190017fbb792\" (UID: \"fc311dc8-dddc-4bc6-b618-190017fbb792\") "
Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.637386 18873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lvm9\" (UniqueName: \"kubernetes.io/projected/fc311dc8-dddc-4bc6-b618-190017fbb792-kube-api-access-9lvm9\") pod \"fc311dc8-dddc-4bc6-b618-190017fbb792\" (UID: \"fc311dc8-dddc-4bc6-b618-190017fbb792\") "
Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.637434 18873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc311dc8-dddc-4bc6-b618-190017fbb792-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "fc311dc8-dddc-4bc6-b618-190017fbb792" (UID: "fc311dc8-dddc-4bc6-b618-190017fbb792"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.639415 18873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc311dc8-dddc-4bc6-b618-190017fbb792-kube-api-access-9lvm9" (OuterVolumeSpecName: "kube-api-access-9lvm9") pod "fc311dc8-dddc-4bc6-b618-190017fbb792" (UID: "fc311dc8-dddc-4bc6-b618-190017fbb792"). InnerVolumeSpecName "kube-api-access-9lvm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.737986 18873 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9lvm9\" (UniqueName: \"kubernetes.io/projected/fc311dc8-dddc-4bc6-b618-190017fbb792-kube-api-access-9lvm9\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.738031 18873 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc311dc8-dddc-4bc6-b618-190017fbb792-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.040155 18873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh2k9\" (UniqueName: \"kubernetes.io/projected/5e62917f-fbaf-47a4-ab23-4c40518c66e2-kube-api-access-gh2k9\") pod \"5e62917f-fbaf-47a4-ab23-4c40518c66e2\" (UID: \"5e62917f-fbaf-47a4-ab23-4c40518c66e2\") "
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.042169 18873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e62917f-fbaf-47a4-ab23-4c40518c66e2-kube-api-access-gh2k9" (OuterVolumeSpecName: "kube-api-access-gh2k9") pod "5e62917f-fbaf-47a4-ab23-4c40518c66e2" (UID: "5e62917f-fbaf-47a4-ab23-4c40518c66e2"). InnerVolumeSpecName "kube-api-access-gh2k9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.141307 18873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4r6v\" (UniqueName: \"kubernetes.io/projected/2c97d680-312e-4178-b7a5-ec0b4dacb6a2-kube-api-access-r4r6v\") pod \"2c97d680-312e-4178-b7a5-ec0b4dacb6a2\" (UID: \"2c97d680-312e-4178-b7a5-ec0b4dacb6a2\") "
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.141424 18873 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gh2k9\" (UniqueName: \"kubernetes.io/projected/5e62917f-fbaf-47a4-ab23-4c40518c66e2-kube-api-access-gh2k9\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.143360 18873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c97d680-312e-4178-b7a5-ec0b4dacb6a2-kube-api-access-r4r6v" (OuterVolumeSpecName: "kube-api-access-r4r6v") pod "2c97d680-312e-4178-b7a5-ec0b4dacb6a2" (UID: "2c97d680-312e-4178-b7a5-ec0b4dacb6a2"). InnerVolumeSpecName "kube-api-access-r4r6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.222819 18873 scope.go:117] "RemoveContainer" containerID="e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.241777 18873 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r4r6v\" (UniqueName: \"kubernetes.io/projected/2c97d680-312e-4178-b7a5-ec0b4dacb6a2-kube-api-access-r4r6v\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.244714 18873 scope.go:117] "RemoveContainer" containerID="e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: E0913 23:39:12.246219 18873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95" containerID="e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.246261 18873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"} err="failed to get container status \"e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95\": rpc error: code = Unknown desc = Error response from daemon: No such container: e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.246286 18873 scope.go:117] "RemoveContainer" containerID="2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.263157 18873 scope.go:117] "RemoveContainer" containerID="2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: E0913 23:39:12.264140 18873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6" containerID="2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.264183 18873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"} err="failed to get container status \"2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.349916 18873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c97d680-312e-4178-b7a5-ec0b4dacb6a2" path="/var/lib/kubelet/pods/2c97d680-312e-4178-b7a5-ec0b4dacb6a2/volumes"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.350313 18873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e62917f-fbaf-47a4-ab23-4c40518c66e2" path="/var/lib/kubelet/pods/5e62917f-fbaf-47a4-ab23-4c40518c66e2/volumes"
Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.350651 18873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc311dc8-dddc-4bc6-b618-190017fbb792" path="/var/lib/kubelet/pods/fc311dc8-dddc-4bc6-b618-190017fbb792/volumes"
==> storage-provisioner [abb216077af8] <==
I0913 23:27:59.660286 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0913 23:27:59.700394 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0913 23:27:59.700437 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0913 23:27:59.734316 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0913 23:27:59.734536 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e62359e4-2a85-41ce-86b1-424cab9a588f!
I0913 23:27:59.735816 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e28357d7-c01f-4c16-bd58-06cf827c1b74", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e62359e4-2a85-41ce-86b1-424cab9a588f became leader
I0913 23:27:59.835949 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e62359e4-2a85-41ce-86b1-424cab9a588f!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: ubuntu-20-agent-2/10.138.0.48
Start Time: Fri, 13 Sep 2024 23:29:59 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.25
IPs:
IP: 10.244.0.25
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j76hf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
kube-api-access-j76hf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
  Normal   Pulling    7m39s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m39s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m39s (x4 over 9m13s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m24s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m4s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
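[editor's note] The same failure history can be pulled without a full describe; a sketch:
  kubectl --context minikube get events -n default \
    --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp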
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.79s)