=== RUN TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.707367ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-pjkt7" [37c3d12e-c029-446f-ae1c-816691f53587] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00382668s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sr6mh" [6a37092e-8132-4577-a7db-ae572e46da9c] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004161297s
addons_test.go:342: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.082394701s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
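To separate a DNS failure from a registry failure, the in-cluster probe can be re-run by hand. A minimal repro sketch, reusing the context and busybox image from the failing command above; the endpoint check, the nslookup step, and the -T timeout are diagnostic additions, not part of the test:
    # does the registry Service have any endpoints at all?
    kubectl --context minikube -n kube-system get svc,endpoints registry
    # re-run the probe, splitting DNS resolution from the HTTP fetch (busybox ships both tools)
    kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "nslookup registry.kube-system.svc.cluster.local; wget --spider -S -T 10 http://registry.kube-system.svc.cluster.local"
If nslookup fails, suspect cluster DNS rather than the registry addon; compare with the direct host-IP probe of http://10.138.0.48:5000 logged below.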
addons_test.go:361: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/18 19:50:09 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:45847 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:37 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC | 18 Sep 24 19:38 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:40 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| | --addons=helm-tiller | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 18 Sep 24 19:40 UTC | 18 Sep 24 19:40 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 18 Sep 24 19:50 UTC | 18 Sep 24 19:50 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
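The audit table above is the profile's command history and should be reproducible on the host with the same binary. A sketch, assuming this minikube build supports the logs --audit flag:
    # print only the audit (command-history) portion of the logs
    out/minikube-linux-amd64 -p minikube logs --audit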
==> Last Start <==
Log file created at: 2024/09/18 19:38:34
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0918 19:38:34.907477 18358 out.go:345] Setting OutFile to fd 1 ...
I0918 19:38:34.907618 18358 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:38:34.907627 18358 out.go:358] Setting ErrFile to fd 2...
I0918 19:38:34.907634 18358 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:38:34.907830 18358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-7534/.minikube/bin
I0918 19:38:34.908455 18358 out.go:352] Setting JSON to false
I0918 19:38:34.909354 18358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1264,"bootTime":1726687051,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0918 19:38:34.909457 18358 start.go:139] virtualization: kvm guest
I0918 19:38:34.911772 18358 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0918 19:38:34.913476 18358 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-7534/.minikube/cache/preloaded-tarball: no such file or directory
I0918 19:38:34.913506 18358 out.go:177] - MINIKUBE_LOCATION=19667
I0918 19:38:34.913600 18358 notify.go:220] Checking for updates...
I0918 19:38:34.916199 18358 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0918 19:38:34.917549 18358 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19667-7534/kubeconfig
I0918 19:38:34.919005 18358 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-7534/.minikube
I0918 19:38:34.920237 18358 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0918 19:38:34.921486 18358 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0918 19:38:34.922753 18358 driver.go:394] Setting default libvirt URI to qemu:///system
I0918 19:38:34.933263 18358 out.go:177] * Using the none driver based on user configuration
I0918 19:38:34.934518 18358 start.go:297] selected driver: none
I0918 19:38:34.934530 18358 start.go:901] validating driver "none" against <nil>
I0918 19:38:34.934539 18358 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0918 19:38:34.934580 18358 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0918 19:38:34.934882 18358 out.go:270] ! The 'none' driver does not respect the --memory flag
I0918 19:38:34.935356 18358 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0918 19:38:34.935606 18358 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0918 19:38:34.935638 18358 cni.go:84] Creating CNI manager for ""
I0918 19:38:34.935682 18358 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0918 19:38:34.935692 18358 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0918 19:38:34.935735 18358 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0918 19:38:34.937114 18358 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0918 19:38:34.938522 18358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/config.json ...
I0918 19:38:34.938553 18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/config.json: {Name:mk471e6aea9507ca28f3d99688faa029c3efa2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:34.938674 18358 start.go:360] acquireMachinesLock for minikube: {Name:mke448a8cf98932a0732986be6ee893948db3617 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0918 19:38:34.938704 18358 start.go:364] duration metric: took 18.655µs to acquireMachinesLock for "minikube"
I0918 19:38:34.938716   18358 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0918 19:38:34.938777 18358 start.go:125] createHost starting for "" (driver="none")
I0918 19:38:34.940087 18358 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0918 19:38:34.941302 18358 exec_runner.go:51] Run: systemctl --version
I0918 19:38:34.943744 18358 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0918 19:38:34.943773 18358 client.go:168] LocalClient.Create starting
I0918 19:38:34.943833 18358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7534/.minikube/certs/ca.pem
I0918 19:38:34.943866 18358 main.go:141] libmachine: Decoding PEM data...
I0918 19:38:34.943892 18358 main.go:141] libmachine: Parsing certificate...
I0918 19:38:34.943946 18358 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-7534/.minikube/certs/cert.pem
I0918 19:38:34.943981 18358 main.go:141] libmachine: Decoding PEM data...
I0918 19:38:34.944018 18358 main.go:141] libmachine: Parsing certificate...
I0918 19:38:34.944366 18358 client.go:171] duration metric: took 584.636µs to LocalClient.Create
I0918 19:38:34.944387 18358 start.go:167] duration metric: took 648.126µs to libmachine.API.Create "minikube"
I0918 19:38:34.944394 18358 start.go:293] postStartSetup for "minikube" (driver="none")
I0918 19:38:34.944442 18358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0918 19:38:34.944470 18358 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0918 19:38:34.956477 18358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0918 19:38:34.956497 18358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0918 19:38:34.956505 18358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0918 19:38:34.958404 18358 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0918 19:38:34.959558 18358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7534/.minikube/addons for local assets ...
I0918 19:38:34.959598 18358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-7534/.minikube/files for local assets ...
I0918 19:38:34.959617 18358 start.go:296] duration metric: took 15.210878ms for postStartSetup
I0918 19:38:34.960969 18358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/config.json ...
I0918 19:38:34.961185 18358 start.go:128] duration metric: took 22.396746ms to createHost
I0918 19:38:34.961197 18358 start.go:83] releasing machines lock for "minikube", held for 22.484135ms
I0918 19:38:34.961903 18358 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0918 19:38:34.961915 18358 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0918 19:38:34.963877 18358 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0918 19:38:34.963939 18358 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0918 19:38:34.973194 18358 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0918 19:38:34.973225 18358 start.go:495] detecting cgroup driver to use...
I0918 19:38:34.973269 18358 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0918 19:38:34.973391 18358 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0918 19:38:34.994377 18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0918 19:38:35.003915 18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0918 19:38:35.014932 18358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0918 19:38:35.014981 18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0918 19:38:35.027598 18358 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0918 19:38:35.038253 18358 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0918 19:38:35.050503 18358 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0918 19:38:35.063151 18358 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0918 19:38:35.071314 18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0918 19:38:35.079341 18358 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0918 19:38:35.090935 18358 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0918 19:38:35.099310 18358 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0918 19:38:35.107007 18358 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0918 19:38:35.117755 18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0918 19:38:35.315496 18358 exec_runner.go:51] Run: sudo systemctl restart containerd
I0918 19:38:35.380679 18358 start.go:495] detecting cgroup driver to use...
I0918 19:38:35.380725 18358 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0918 19:38:35.380829 18358 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0918 19:38:35.399969 18358 exec_runner.go:51] Run: which cri-dockerd
I0918 19:38:35.400823 18358 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0918 19:38:35.408304 18358 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0918 19:38:35.408320 18358 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0918 19:38:35.408351 18358 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0918 19:38:35.415985 18358 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0918 19:38:35.416124 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1070745449 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0918 19:38:35.423120 18358 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0918 19:38:35.637504 18358 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0918 19:38:35.851602 18358 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0918 19:38:35.851767 18358 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0918 19:38:35.851781 18358 exec_runner.go:203] rm: /etc/docker/daemon.json
I0918 19:38:35.851819 18358 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0918 19:38:35.860528 18358 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0918 19:38:35.860660 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube678808258 /etc/docker/daemon.json
I0918 19:38:35.868187 18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0918 19:38:36.066089 18358 exec_runner.go:51] Run: sudo systemctl restart docker
I0918 19:38:36.356377 18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0918 19:38:36.366969 18358 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0918 19:38:36.381805 18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0918 19:38:36.392738 18358 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0918 19:38:36.610377 18358 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0918 19:38:36.826375 18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0918 19:38:37.036994 18358 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0918 19:38:37.051723 18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0918 19:38:37.062064 18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0918 19:38:37.282568 18358 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0918 19:38:37.347137 18358 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0918 19:38:37.347212 18358 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0918 19:38:37.348610 18358 start.go:563] Will wait 60s for crictl version
I0918 19:38:37.348661 18358 exec_runner.go:51] Run: which crictl
I0918 19:38:37.349542 18358 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0918 19:38:37.379000 18358 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.1
RuntimeApiVersion: v1
I0918 19:38:37.379063 18358 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0918 19:38:37.399329 18358 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0918 19:38:37.421679 18358 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
I0918 19:38:37.421767 18358 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0918 19:38:37.424400 18358 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0918 19:38:37.425489   18358 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0918 19:38:37.425593 18358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 19:38:37.425603 18358 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
I0918 19:38:37.425681 18358 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
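The kubelet unit and drop-in rendered above are written to disk a moment later (see the 10-kubeadm.conf and kubelet.service copies below); to confirm which flags are actually in effect on the host, a sketch:
    sudo systemctl cat kubelet                                  # unit plus all drop-ins, as systemd sees them
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the drop-in minikube just generated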
I0918 19:38:37.425719 18358 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0918 19:38:37.471672 18358 cni.go:84] Creating CNI manager for ""
I0918 19:38:37.471693 18358 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0918 19:38:37.471702 18358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0918 19:38:37.471722   18358 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0918 19:38:37.471847 18358 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.138.0.48
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-2"
kubeletExtraArgs:
node-ip: 10.138.0.48
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
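The generated config can be sanity-checked before init runs. A sketch using the kubeadm binary installed below; `config migrate` is exactly what the v1beta3 deprecation warnings at 19:38:39 recommend, while `config validate` is assumed present (it exists in kubeadm v1.26 and later), and the migrated output path is illustrative:
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # /tmp/kubeadm-migrated.yaml is an illustrative output path
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml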
I0918 19:38:37.471901 18358 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0918 19:38:37.480725 18358 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0918 19:38:37.480774 18358 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0918 19:38:37.488341 18358 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0918 19:38:37.488343 18358 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0918 19:38:37.488400 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0918 19:38:37.488341 18358 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0918 19:38:37.488416 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0918 19:38:37.488461 18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0918 19:38:37.500152 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0918 19:38:37.536043 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3334410818 /var/lib/minikube/binaries/v1.31.1/kubectl
I0918 19:38:37.538516 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2265485867 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0918 19:38:37.569706 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2520207745 /var/lib/minikube/binaries/v1.31.1/kubelet
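The checksum=file: URLs above mirror the upstream layout: each binary on dl.k8s.io ships with a sibling .sha256 file containing the bare hash. A manual fetch-and-verify sketch for one of them:
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # prints "kubelet: OK" on success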
I0918 19:38:37.632845 18358 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0918 19:38:37.641138 18358 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0918 19:38:37.641155 18358 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0918 19:38:37.641192 18358 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0918 19:38:37.648760 18358 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
I0918 19:38:37.648896 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2331271920 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0918 19:38:37.656197 18358 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0918 19:38:37.656213 18358 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0918 19:38:37.656246 18358 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0918 19:38:37.663160 18358 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0918 19:38:37.663275 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube520074694 /lib/systemd/system/kubelet.service
I0918 19:38:37.670317 18358 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0918 19:38:37.670422 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2319329196 /var/tmp/minikube/kubeadm.yaml.new
I0918 19:38:37.677625 18358 exec_runner.go:51] Run: grep 10.138.0.48 control-plane.minikube.internal$ /etc/hosts
I0918 19:38:37.678880 18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0918 19:38:37.874779 18358 exec_runner.go:51] Run: sudo systemctl start kubelet
I0918 19:38:37.888013 18358 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube for IP: 10.138.0.48
I0918 19:38:37.888036 18358 certs.go:194] generating shared ca certs ...
I0918 19:38:37.888051 18358 certs.go:226] acquiring lock for ca certs: {Name:mk65b5fdc4f09d8572cba4b78a9b9522b46d6547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:37.888165 18358 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-7534/.minikube/ca.key
I0918 19:38:37.888203 18358 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-7534/.minikube/proxy-client-ca.key
I0918 19:38:37.888211 18358 certs.go:256] generating profile certs ...
I0918 19:38:37.888264 18358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.key
I0918 19:38:37.888282 18358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.crt with IP's: []
I0918 19:38:38.219920 18358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.crt ...
I0918 19:38:38.219945 18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.crt: {Name:mk7a305a245408683f9dc09eec8cdb01252d189d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:38.220068 18358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.key ...
I0918 19:38:38.220077 18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/client.key: {Name:mkef102ad8868bb80cf4d3679d0c36d6221fcc8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:38.220136 18358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key.35c0634a
I0918 19:38:38.220151 18358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
I0918 19:38:38.403245 18358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
I0918 19:38:38.403272 18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk8e8f8432a65feae42322cf5536789412a3a331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:38.403427 18358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key.35c0634a ...
I0918 19:38:38.403440 18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkdf9557fc428470889289b76459a1ace027e047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:38.403501 18358 certs.go:381] copying /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt
I0918 19:38:38.403572 18358 certs.go:385] copying /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key
I0918 19:38:38.403621 18358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.key
I0918 19:38:38.403634 18358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0918 19:38:38.795716 18358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.crt ...
I0918 19:38:38.795750 18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.crt: {Name:mke9354a5935c18a60f656e73092a57c9dcd390a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:38.795922 18358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.key ...
I0918 19:38:38.795937 18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.key: {Name:mk8fe9ea55d5723bad0c40dbc5858f67dda4edb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:38.796118 18358 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7534/.minikube/certs/ca-key.pem (1675 bytes)
I0918 19:38:38.796164 18358 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7534/.minikube/certs/ca.pem (1078 bytes)
I0918 19:38:38.796202 18358 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7534/.minikube/certs/cert.pem (1123 bytes)
I0918 19:38:38.796243 18358 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-7534/.minikube/certs/key.pem (1675 bytes)
I0918 19:38:38.796827 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0918 19:38:38.796968 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube114346312 /var/lib/minikube/certs/ca.crt
I0918 19:38:38.806202 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0918 19:38:38.806349 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3190871643 /var/lib/minikube/certs/ca.key
I0918 19:38:38.814025 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0918 19:38:38.814143 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube714172427 /var/lib/minikube/certs/proxy-client-ca.crt
I0918 19:38:38.821790 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0918 19:38:38.821891 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2316883142 /var/lib/minikube/certs/proxy-client-ca.key
I0918 19:38:38.829377 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0918 19:38:38.829478 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2017926993 /var/lib/minikube/certs/apiserver.crt
I0918 19:38:38.837488 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0918 19:38:38.837590 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3103838905 /var/lib/minikube/certs/apiserver.key
I0918 19:38:38.844975 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0918 19:38:38.845077 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2199736669 /var/lib/minikube/certs/proxy-client.crt
I0918 19:38:38.854065 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0918 19:38:38.854171 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube591505349 /var/lib/minikube/certs/proxy-client.key
I0918 19:38:38.862457 18358 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0918 19:38:38.862475 18358 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:38.862502 18358 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:38.869981 18358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-7534/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0918 19:38:38.870151 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2523232502 /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:38.878287 18358 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0918 19:38:38.878398 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1139217587 /var/lib/minikube/kubeconfig
I0918 19:38:38.886121 18358 exec_runner.go:51] Run: openssl version
I0918 19:38:38.888859 18358 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0918 19:38:38.897024 18358 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:38.898368 18358 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:38.898439 18358 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0918 19:38:38.901230 18358 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
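The b5213941.0 link name is not arbitrary: it is the OpenSSL subject-hash of the CA computed just above, with the .0 suffix disambiguating hash collisions. The chain can be verified directly:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    readlink /etc/ssl/certs/b5213941.0                                        # -> /etc/ssl/certs/minikubeCA.pem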
I0918 19:38:38.909165 18358 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0918 19:38:38.910233 18358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0918 19:38:38.910274   18358 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0918 19:38:38.910391 18358 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0918 19:38:38.925250 18358 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0918 19:38:38.933316 18358 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0918 19:38:38.941391 18358 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0918 19:38:38.961529 18358 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0918 19:38:38.969607 18358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0918 19:38:38.969627 18358 kubeadm.go:157] found existing configuration files:
I0918 19:38:38.969671 18358 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0918 19:38:38.977609 18358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0918 19:38:38.977667 18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0918 19:38:38.985052 18358 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0918 19:38:38.993469 18358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0918 19:38:38.993512 18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0918 19:38:39.000403 18358 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0918 19:38:39.007808 18358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0918 19:38:39.007846 18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0918 19:38:39.015780 18358 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0918 19:38:39.022837 18358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0918 19:38:39.022879 18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0918 19:38:39.029518 18358 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0918 19:38:39.061084 18358 kubeadm.go:310] W0918 19:38:39.060975 19261 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0918 19:38:39.061559 18358 kubeadm.go:310] W0918 19:38:39.061524 19261 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0918 19:38:39.063220 18358 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0918 19:38:39.063281 18358 kubeadm.go:310] [preflight] Running pre-flight checks
I0918 19:38:39.150170 18358 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0918 19:38:39.150271 18358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0918 19:38:39.150280 18358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0918 19:38:39.150284 18358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0918 19:38:39.160278 18358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0918 19:38:39.163056 18358 out.go:235] - Generating certificates and keys ...
I0918 19:38:39.163100 18358 kubeadm.go:310] [certs] Using existing ca certificate authority
I0918 19:38:39.163115 18358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0918 19:38:39.312261 18358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0918 19:38:39.464336 18358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0918 19:38:39.646493 18358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0918 19:38:39.827871 18358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0918 19:38:40.037347 18358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0918 19:38:40.037384 18358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0918 19:38:40.418996 18358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0918 19:38:40.419094 18358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0918 19:38:40.909753 18358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0918 19:38:41.422374 18358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0918 19:38:41.607093 18358 kubeadm.go:310] [certs] Generating "sa" key and public key
I0918 19:38:41.607270 18358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0918 19:38:41.957417 18358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0918 19:38:42.094341 18358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0918 19:38:42.368072 18358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0918 19:38:42.649834 18358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0918 19:38:42.835540 18358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0918 19:38:42.836112 18358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0918 19:38:42.838352 18358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0918 19:38:42.840660 18358 out.go:235] - Booting up control plane ...
I0918 19:38:42.840685 18358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0918 19:38:42.840700 18358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0918 19:38:42.840707 18358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0918 19:38:42.860325 18358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0918 19:38:42.864371 18358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0918 19:38:42.864408 18358 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0918 19:38:43.102040 18358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0918 19:38:43.102065 18358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0918 19:38:43.603531 18358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.48967ms
I0918 19:38:43.603551 18358 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0918 19:38:47.605319 18358 kubeadm.go:310] [api-check] The API server is healthy after 4.0017528s
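Both health gates can be reproduced by hand; a minimal sketch using the endpoints from the log (curl assumed; -k skips TLS verification for a quick probe only):

  # Kubelet healthz, the same endpoint kubeadm polls on port 10248
  curl -sf http://127.0.0.1:10248/healthz; echo
  # API server healthz on the advertise address
  curl -sk https://10.138.0.48:8443/healthz; echo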
I0918 19:38:47.616560 18358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0918 19:38:47.626499 18358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0918 19:38:47.643236 18358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0918 19:38:47.643255 18358 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0918 19:38:47.651082 18358 kubeadm.go:310] [bootstrap-token] Using token: wjrm4f.2vgnn7i37ubo4hzx
I0918 19:38:47.652697 18358 out.go:235] - Configuring RBAC rules ...
I0918 19:38:47.652724 18358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0918 19:38:47.655891 18358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0918 19:38:47.661473 18358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0918 19:38:47.663804 18358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0918 19:38:47.667212 18358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0918 19:38:47.669413 18358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0918 19:38:48.011654 18358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0918 19:38:48.434584 18358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0918 19:38:49.010931 18358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0918 19:38:49.011778 18358 kubeadm.go:310]
I0918 19:38:49.011796 18358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0918 19:38:49.011801 18358 kubeadm.go:310]
I0918 19:38:49.011806 18358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0918 19:38:49.011810 18358 kubeadm.go:310]
I0918 19:38:49.011814 18358 kubeadm.go:310] mkdir -p $HOME/.kube
I0918 19:38:49.011818 18358 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0918 19:38:49.011823 18358 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0918 19:38:49.011827 18358 kubeadm.go:310]
I0918 19:38:49.011831 18358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0918 19:38:49.011835 18358 kubeadm.go:310]
I0918 19:38:49.011840 18358 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0918 19:38:49.011844 18358 kubeadm.go:310]
I0918 19:38:49.011848 18358 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0918 19:38:49.011852 18358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0918 19:38:49.011857 18358 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0918 19:38:49.011862 18358 kubeadm.go:310]
I0918 19:38:49.011870 18358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0918 19:38:49.011874 18358 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0918 19:38:49.011880 18358 kubeadm.go:310]
I0918 19:38:49.011891 18358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wjrm4f.2vgnn7i37ubo4hzx \
I0918 19:38:49.011901 18358 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:28cdc99d0457e5db15d389dfa720477b3024488a6161fe0e97e3db0521042b91 \
I0918 19:38:49.011905 18358 kubeadm.go:310] --control-plane
I0918 19:38:49.011909 18358 kubeadm.go:310]
I0918 19:38:49.011913 18358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0918 19:38:49.011920 18358 kubeadm.go:310]
I0918 19:38:49.011924 18358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wjrm4f.2vgnn7i37ubo4hzx \
I0918 19:38:49.011927 18358 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:28cdc99d0457e5db15d389dfa720477b3024488a6161fe0e97e3db0521042b91
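If the join command above is lost, the --discovery-token-ca-cert-hash can be recomputed from the cluster CA; a sketch assuming the cert dir from this log (this is the standard openssl pipeline from the kubeadm documentation):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'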
I0918 19:38:49.014652 18358 cni.go:84] Creating CNI manager for ""
I0918 19:38:49.014674 18358 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0918 19:38:49.016166 18358 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0918 19:38:49.017161 18358 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0918 19:38:49.026365 18358 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0918 19:38:49.026479 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4167341525 /etc/cni/net.d/1-k8s.conflist
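The exact 496-byte payload copied into /etc/cni/net.d/1-k8s.conflist is not shown in the log; for illustration only, a bridge conflist of this kind typically looks like the following (the subnet and plugin options here are assumptions, not the logged file):

  {
    "cniVersion": "0.4.0",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
        "ipMasq": true, "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }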
I0918 19:38:49.037115 18358 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0918 19:38:49.037197 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:49.037214 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_18T19_38_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0918 19:38:49.046034 18358 ops.go:34] apiserver oom_adj: -16
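The oom_adj of -16 means the kernel's OOM killer will prefer almost any other process over the apiserver under memory pressure. The minikube-rbac binding created just above can be verified with an impersonation check; a sketch reusing the binary and kubeconfig paths from the log:

  # Does kube-system:default now hold cluster-admin rights?
  sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    auth can-i '*' '*' --as=system:serviceaccount:kube-system:default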
I0918 19:38:49.100270 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:49.601112 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:50.100790 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:50.600275 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:51.100432 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:51.600922 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:52.100560 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:52.600236 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:53.101237 18358 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0918 19:38:53.162945 18358 kubeadm.go:1113] duration metric: took 4.125814699s to wait for elevateKubeSystemPrivileges
I0918 19:38:53.162982 18358 kubeadm.go:394] duration metric: took 14.252711851s to StartCluster
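The repeated "get sa default" calls above are a poll for the default service account, which only appears once the controller-manager's token machinery is up. The same wait as a shell loop (a sketch, same binary and kubeconfig as the log):

  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n default get sa default >/dev/null 2>&1; do sleep 0.5; done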
I0918 19:38:53.163007 18358 settings.go:142] acquiring lock: {Name:mk3846031f18742dba5e0055936aaf5360b0d10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:53.163095 18358 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19667-7534/kubeconfig
I0918 19:38:53.163675 18358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-7534/kubeconfig: {Name:mk35981c537c4532b3420938e79612e6eea6d7d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0918 19:38:53.163885 18358 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0918 19:38:53.163967 18358 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0918 19:38:53.164050 18358 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:38:53.164082 18358 addons.go:69] Setting yakd=true in profile "minikube"
I0918 19:38:53.164089 18358 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0918 19:38:53.164086 18358 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0918 19:38:53.164100 18358 addons.go:234] Setting addon yakd=true in "minikube"
I0918 19:38:53.164104 18358 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0918 19:38:53.164093 18358 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0918 19:38:53.164107 18358 addons.go:69] Setting helm-tiller=true in profile "minikube"
I0918 19:38:53.164119 18358 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0918 19:38:53.164123 18358 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0918 19:38:53.164137 18358 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0918 19:38:53.164139 18358 addons.go:69] Setting metrics-server=true in profile "minikube"
I0918 19:38:53.164146 18358 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0918 19:38:53.164152 18358 addons.go:234] Setting addon metrics-server=true in "minikube"
I0918 19:38:53.164153 18358 mustload.go:65] Loading cluster: minikube
I0918 19:38:53.164157 18358 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0918 19:38:53.164111 18358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0918 19:38:53.164169 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.164176 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.164183 18358 addons.go:69] Setting volcano=true in profile "minikube"
I0918 19:38:53.164198 18358 addons.go:234] Setting addon volcano=true in "minikube"
I0918 19:38:53.164224 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.164130 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.164340 18358 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:38:53.164480 18358 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0918 19:38:53.164517 18358 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0918 19:38:53.164878 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.164893 18358 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0918 19:38:53.164898 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.164907 18358 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0918 19:38:53.164927 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.164157 18358 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0918 19:38:53.164957 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.164959 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.164971 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.165007 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.164130 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.165266 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.165284 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.165313 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.165486 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.165500 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.165527 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.165528 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.165539 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.164138 18358 addons.go:69] Setting registry=true in profile "minikube"
I0918 19:38:53.165568 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.165583 18358 addons.go:234] Setting addon registry=true in "minikube"
I0918 19:38:53.165608 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.165692 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.165709 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.165748 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.164934 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.164173 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.166229 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.166246 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.166273 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.164131 18358 addons.go:234] Setting addon helm-tiller=true in "minikube"
I0918 19:38:53.166315 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.166424 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.166443 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.166473 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.164134 18358 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0918 19:38:53.166586 18358 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0918 19:38:53.166614 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.164878 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.166644 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.166677 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.166954 18358 out.go:177] * Configuring local host environment ...
I0918 19:38:53.164878 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.167105 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.167139 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.167230 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.167243 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.164880 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.167271 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.167272 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.167302 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.164878 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.167843 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.167902 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0918 19:38:53.168260 18358 out.go:270] *
W0918 19:38:53.168277 18358 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0918 19:38:53.168285 18358 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0918 19:38:53.168301 18358 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0918 19:38:53.168306 18358 out.go:270] *
W0918 19:38:53.168345 18358 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0918 19:38:53.168352 18358 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0918 19:38:53.168358 18358 out.go:270] *
W0918 19:38:53.168382 18358 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0918 19:38:53.168389 18358 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0918 19:38:53.168395 18358 out.go:270] *
W0918 19:38:53.168401 18358 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0918 19:38:53.168427 18358 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0918 19:38:53.170345 18358 out.go:177] * Verifying Kubernetes components...
I0918 19:38:53.172330 18358 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0918 19:38:53.185947 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.187676 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.187749 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.188034 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.188252 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.188278 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.188309 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.189050 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.191869 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.196693 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.204204 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.208651 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.208716 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.208662 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.208792 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.214379 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.218734 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.218992 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.219044 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.219258 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.219759 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.219791 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.226317 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.226332 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.226347 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.226383 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.226386 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.226394 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.230967 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.231021 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.232885 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.232911 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.234550 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.234610 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.234613 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.234629 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.234780 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.234834 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.235934 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.238710 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.239839 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.239862 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.244205 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.244228 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.244897 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.246147 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
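Each addon goroutine runs the same three-step probe that is interleaved above: find the apiserver PID, read its cgroup-v1 freezer state, then hit healthz. Consolidated into one sketch (paths and address taken from the log; assumes cgroup v1, as on this host):

  pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
  cg=$(sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3)
  sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"   # expect THAWED
  curl -sk https://10.138.0.48:8443/healthz; echo        # expect ok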
I0918 19:38:53.247735 18358 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0918 19:38:53.247777 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.248423 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.248437 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.248469 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.250108 18358 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0918 19:38:53.250145 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:38:53.250736 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:38:53.250755 18358 api_server.go:166] Checking apiserver status ...
I0918 19:38:53.250786 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:38:53.260710 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.260767 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.264499 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.264528 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.265495 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.267709 18358 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0918 19:38:53.269532 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.269685 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.269706 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.270697 18358 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0918 19:38:53.271527 18358 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0918 19:38:53.271686 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.271901 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.271946 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.273348 18358 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0918 19:38:53.273383 18358 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0918 19:38:53.273509 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube834318975 /etc/kubernetes/addons/yakd-ns.yaml
I0918 19:38:53.273690 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.273703 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.274909 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.274933 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.277670 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.278848 18358 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0918 19:38:53.279163 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.279565 18358 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0918 19:38:53.280803 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.281334 18358 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0918 19:38:53.281367 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0918 19:38:53.281499 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1696474448 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0918 19:38:53.281600 18358 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0918 19:38:53.281686 18358 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0918 19:38:53.281931 18358 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0918 19:38:53.283088 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.283143 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.283219 18358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0918 19:38:53.283239 18358 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0918 19:38:53.283248 18358 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0918 19:38:53.283294 18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0918 19:38:53.283900 18358 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0918 19:38:53.284006 18358 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0918 19:38:53.284130 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube351705620 /etc/kubernetes/addons/metrics-apiservice.yaml
I0918 19:38:53.284328 18358 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0918 19:38:53.284492 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.284899 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.285001 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.286977 18358 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0918 19:38:53.288020 18358 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0918 19:38:53.288028 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.288100 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.292568 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.292653 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.294466 18358 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0918 19:38:53.296635 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.297932 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.297957 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.298039 18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0918 19:38:53.298069 18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0918 19:38:53.298247 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube410085531 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0918 19:38:53.301025 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.301045 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.301361 18358 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0918 19:38:53.301720 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0918 19:38:53.302117 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.302759 18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0918 19:38:53.302794 18358 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0918 19:38:53.302904 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1213085578 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0918 19:38:53.303891 18358 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0918 19:38:53.304868 18358 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0918 19:38:53.304891 18358 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0918 19:38:53.305001 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3135139328 /etc/kubernetes/addons/ig-namespace.yaml
I0918 19:38:53.305754 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.305802 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.305994 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.306236 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:38:53.307207 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.307502 18358 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0918 19:38:53.308543 18358 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0918 19:38:53.308574 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0918 19:38:53.308692 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4240804950 /etc/kubernetes/addons/helm-tiller-dp.yaml
I0918 19:38:53.308837 18358 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0918 19:38:53.310147 18358 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0918 19:38:53.310178 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0918 19:38:53.310319 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3747871862 /etc/kubernetes/addons/deployment.yaml
I0918 19:38:53.311558 18358 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0918 19:38:53.311945 18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0918 19:38:53.312084 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2254601592 /etc/kubernetes/addons/rbac-hostpath.yaml
I0918 19:38:53.312125 18358 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0918 19:38:53.312143 18358 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0918 19:38:53.312244 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3138600989 /etc/kubernetes/addons/yakd-sa.yaml
I0918 19:38:53.315399 18358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0918 19:38:53.315429 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0918 19:38:53.315539 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2733771184 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0918 19:38:53.316513 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0918 19:38:53.316637 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2632347266 /etc/kubernetes/addons/storage-provisioner.yaml
I0918 19:38:53.321624 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.321652 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.322456 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.322506 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.323828 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.323850 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.327118 18358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0918 19:38:53.327153 18358 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0918 19:38:53.327291 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3470057 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0918 19:38:53.327661 18358 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0918 19:38:53.327684 18358 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0918 19:38:53.327794 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1878468979 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0918 19:38:53.329528 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.329746 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.331462 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:38:53.331508 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:38:53.331971 18358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0918 19:38:53.331990 18358 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0918 19:38:53.332089 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube125866827 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0918 19:38:53.334379 18358 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0918 19:38:53.334407 18358 out.go:177] - Using image docker.io/registry:2.8.3
I0918 19:38:53.334579 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0918 19:38:53.338522 18358 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0918 19:38:53.339635 18358 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0918 19:38:53.339664 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0918 19:38:53.339778 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1028062374 /etc/kubernetes/addons/registry-rc.yaml
I0918 19:38:53.339947 18358 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0918 19:38:53.340926 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0918 19:38:53.343316 18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0918 19:38:53.343346 18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0918 19:38:53.343442 18358 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0918 19:38:53.343652 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3364091950 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0918 19:38:53.346268 18358 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0918 19:38:53.346308 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0918 19:38:53.346836 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1320522976 /etc/kubernetes/addons/volcano-deployment.yaml
I0918 19:38:53.347874 18358 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0918 19:38:53.347909 18358 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0918 19:38:53.348032 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1806307833 /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0918 19:38:53.349514 18358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0918 19:38:53.349587 18358 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0918 19:38:53.350467 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4120198128 /etc/kubernetes/addons/metrics-server-service.yaml
I0918 19:38:53.350731 18358 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0918 19:38:53.350761 18358 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0918 19:38:53.350877 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3624673884 /etc/kubernetes/addons/yakd-crb.yaml
I0918 19:38:53.353384 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.353411 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.363527 18358 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0918 19:38:53.366435 18358 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0918 19:38:53.366604 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube123769422 /etc/kubernetes/addons/helm-tiller-svc.yaml
I0918 19:38:53.369446 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.369507 18358 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0918 19:38:53.369523 18358 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0918 19:38:53.369530 18358 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0918 19:38:53.369570 18358 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0918 19:38:53.371085 18358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0918 19:38:53.371119 18358 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0918 19:38:53.371311 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube466298229 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0918 19:38:53.376161 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0918 19:38:53.378586 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:38:53.378613 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:38:53.378924 18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0918 19:38:53.378948 18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0918 19:38:53.379077 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube971339646 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0918 19:38:53.381215 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0918 19:38:53.384378 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:38:53.386720 18358 out.go:177] - Using image docker.io/busybox:stable
I0918 19:38:53.388048 18358 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0918 19:38:53.388510 18358 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0918 19:38:53.388533 18358 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0918 19:38:53.388649 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3633779932 /etc/kubernetes/addons/yakd-svc.yaml
I0918 19:38:53.389304 18358 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0918 19:38:53.389693 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0918 19:38:53.389971 18358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0918 19:38:53.390003 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0918 19:38:53.390122 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube963188202 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0918 19:38:53.396067 18358 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0918 19:38:53.396096 18358 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0918 19:38:53.396204 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2384143518 /etc/kubernetes/addons/registry-svc.yaml
I0918 19:38:53.399419 18358 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0918 19:38:53.399549 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3632654475 /etc/kubernetes/addons/storageclass.yaml
I0918 19:38:53.404692 18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0918 19:38:53.404910 18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0918 19:38:53.405081 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1543457254 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0918 19:38:53.410379 18358 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0918 19:38:53.410413 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0918 19:38:53.410531 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2102482626 /etc/kubernetes/addons/registry-proxy.yaml
I0918 19:38:53.414877 18358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0918 19:38:53.414910 18358 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0918 19:38:53.415045 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube226649032 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0918 19:38:53.415380 18358 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0918 19:38:53.415408 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0918 19:38:53.415512 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2668014653 /etc/kubernetes/addons/yakd-dp.yaml
I0918 19:38:53.417147 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0918 19:38:53.422055 18358 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0918 19:38:53.422086 18358 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0918 19:38:53.422191 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3529073702 /etc/kubernetes/addons/ig-role.yaml
I0918 19:38:53.439289 18358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0918 19:38:53.439412 18358 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0918 19:38:53.439567 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3092662641 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0918 19:38:53.446509 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0918 19:38:53.446788 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0918 19:38:53.447942 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0918 19:38:53.468440 18358 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0918 19:38:53.468476 18358 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0918 19:38:53.468618 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1476691340 /etc/kubernetes/addons/ig-rolebinding.yaml
I0918 19:38:53.475044 18358 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0918 19:38:53.475080 18358 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0918 19:38:53.475220 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3940388329 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0918 19:38:53.500458 18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0918 19:38:53.500500 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0918 19:38:53.500641 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1583852321 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0918 19:38:53.516511 18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0918 19:38:53.516554 18358 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0918 19:38:53.516683 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2311948277 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0918 19:38:53.552898 18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0918 19:38:53.552998 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0918 19:38:53.553237 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4230616940 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0918 19:38:53.581447 18358 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0918 19:38:53.581490 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0918 19:38:53.581687 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2802059548 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0918 19:38:53.595225 18358 exec_runner.go:51] Run: sudo systemctl start kubelet
I0918 19:38:53.613240 18358 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0918 19:38:53.613282 18358 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0918 19:38:53.613419 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube132100541 /etc/kubernetes/addons/ig-clusterrole.yaml
I0918 19:38:53.628312 18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0918 19:38:53.628348 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0918 19:38:53.628981 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube357447146 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0918 19:38:53.641344 18358 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
I0918 19:38:53.644243 18358 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
I0918 19:38:53.644267 18358 node_ready.go:38] duration metric: took 2.895093ms for node "ubuntu-20-agent-2" to be "Ready" ...
I0918 19:38:53.644277 18358 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
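Both readiness gates here can be expressed directly with kubectl; a minimal sketch against the live profile (context name assumed to be minikube, as elsewhere in this run):

  kubectl --context minikube wait --for=condition=Ready node/ubuntu-20-agent-2 --timeout=6m
  kubectl --context minikube -n kube-system get pods -l k8s-app=kube-dns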
I0918 19:38:53.660037 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0918 19:38:53.660599 18358 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0918 19:38:53.661840 18358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0918 19:38:53.661893 18358 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0918 19:38:53.662066 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3926256249 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0918 19:38:53.666470 18358 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0918 19:38:53.666497 18358 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0918 19:38:53.666588 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2699613024 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0918 19:38:53.708457 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0918 19:38:53.784078 18358 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0918 19:38:53.784118 18358 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0918 19:38:53.784262 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3433676318 /etc/kubernetes/addons/ig-crd.yaml
I0918 19:38:53.843488 18358 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0918 19:38:53.843520 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0918 19:38:53.843655 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1481242130 /etc/kubernetes/addons/ig-daemonset.yaml
I0918 19:38:53.851362 18358 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
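[editor's note] The host-record injection logged just above lands in the CoreDNS ConfigMap's Corefile. If it ever needs checking by hand, dumping the Corefile is enough to see whether the host.minikube.internal entry made it in — a minimal check using only standard kubectl (the context name comes from this log; nothing else is assumed):

  kubectl --context minikube -n kube-system get configmap coredns \
    -o jsonpath='{.data.Corefile}'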
I0918 19:38:53.900598 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0918 19:38:54.157159 18358 addons.go:475] Verifying addon registry=true in "minikube"
I0918 19:38:54.159540 18358 out.go:177] * Verifying registry addon...
I0918 19:38:54.163861 18358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0918 19:38:54.172019 18358 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0918 19:38:54.172041 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:54.358275 18358 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0918 19:38:54.359810 18358 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0918 19:38:54.475480 18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.028894011s)
I0918 19:38:54.477821 18358 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0918 19:38:54.676865 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:54.819855 18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.371869433s)
I0918 19:38:54.992329 18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.091659081s)
I0918 19:38:55.177576 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:55.180020 18358 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0918 19:38:55.180044 18358 pod_ready.go:82] duration metric: took 1.519366013s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0918 19:38:55.180056 18358 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0918 19:38:55.438778 18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.778674911s)
W0918 19:38:55.438813 18358 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0918 19:38:55.438840 18358 retry.go:31] will retry after 270.743625ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
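[editor's note] The failure above is a create-ordering race, not a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates its CRD, and the API server has not yet established the new type — hence "no matches for kind VolumeSnapshotClass". minikube copes by retrying (and, below, by re-running with apply --force). Done by hand, the equivalent fix is to apply the CRDs on their own, wait for them to be established, then apply the custom resource. A sketch using the same binary and file paths as this log:

  # Apply the snapshot CRDs first (paths as logged above).
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.31.1/kubectl apply \
    -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
    -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
    -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
  # Block until the API server is actually serving the new type.
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.31.1/kubectl wait --timeout=60s \
    --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io
  # Only then apply the VolumeSnapshotClass that failed above.
  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.31.1/kubectl apply \
    -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml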
I0918 19:38:55.670370 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:55.710471 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0918 19:38:56.174498 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:56.524295 18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.815780821s)
I0918 19:38:56.524336 18358 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0918 19:38:56.527887 18358 out.go:177] * Verifying csi-hostpath-driver addon...
I0918 19:38:56.530653 18358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0918 19:38:56.541081 18358 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 19:38:56.541111 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:38:56.671800 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:56.686309 18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.310110448s)
I0918 19:38:57.035623 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:38:57.168343 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:57.186532 18358 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0918 19:38:57.536105 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:38:57.667483 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:58.036210 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:38:58.167836 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:58.535556 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:38:58.668469 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:58.775758 18358 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.0652296s)
I0918 19:38:59.036007 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:38:59.168282 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:38:59.186316 18358 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0918 19:38:59.186334 18358 pod_ready.go:82] duration metric: took 4.006271162s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0918 19:38:59.186344 18358 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0918 19:38:59.190924 18358 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0918 19:38:59.190942 18358 pod_ready.go:82] duration metric: took 4.590676ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0918 19:38:59.190951 18358 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6rkhh" in "kube-system" namespace to be "Ready" ...
I0918 19:38:59.195211 18358 pod_ready.go:93] pod "kube-proxy-6rkhh" in "kube-system" namespace has status "Ready":"True"
I0918 19:38:59.195235 18358 pod_ready.go:82] duration metric: took 4.277487ms for pod "kube-proxy-6rkhh" in "kube-system" namespace to be "Ready" ...
I0918 19:38:59.195247 18358 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0918 19:38:59.199249 18358 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0918 19:38:59.199268 18358 pod_ready.go:82] duration metric: took 4.013479ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0918 19:38:59.199279 18358 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w5zgj" in "kube-system" namespace to be "Ready" ...
I0918 19:38:59.537633 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:38:59.667359 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:00.036250 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:00.167943 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:00.257876 18358 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0918 19:39:00.258106 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3275120385 /var/lib/minikube/google_application_credentials.json
I0918 19:39:00.267885 18358 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0918 19:39:00.268004 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2008867218 /var/lib/minikube/google_cloud_project
I0918 19:39:00.279993 18358 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0918 19:39:00.280054 18358 host.go:66] Checking if "minikube" exists ...
I0918 19:39:00.280579 18358 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0918 19:39:00.280596 18358 api_server.go:166] Checking apiserver status ...
I0918 19:39:00.280627 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:39:00.297332 18358 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/19711/cgroup
I0918 19:39:00.308935 18358 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59"
I0918 19:39:00.308994 18358 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/0796b5b669ba3fef6753c4efc2a211d90a47737695aad3affcde2072aa077b59/freezer.state
I0918 19:39:00.317646 18358 api_server.go:204] freezer state: "THAWED"
I0918 19:39:00.317674 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:39:00.322680 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
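[editor's note] The four-step health probe in the lines above — find the apiserver PID, resolve its freezer cgroup, confirm the cgroup is THAWED, then hit /healthz — can be reproduced outside minikube when debugging a wedged apiserver. A rough shell equivalent, assuming the same none-driver host, cgroup v1 freezer layout, and apiserver address (10.138.0.48:8443) as in this log:

  PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
  CG=$(sudo grep -E '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
  sudo cat /sys/fs/cgroup/freezer$CG/freezer.state   # expect: THAWED
  curl -sk https://10.138.0.48:8443/healthz          # expect: ok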
I0918 19:39:00.322736 18358 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0918 19:39:00.343201 18358 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0918 19:39:00.345514 18358 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0918 19:39:00.367329 18358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0918 19:39:00.367376 18358 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0918 19:39:00.367537 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1759379821 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0918 19:39:00.376199 18358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0918 19:39:00.376228 18358 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0918 19:39:00.376347 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1096912487 /etc/kubernetes/addons/gcp-auth-service.yaml
I0918 19:39:00.385778 18358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0918 19:39:00.385813 18358 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0918 19:39:00.385932 18358 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3046175433 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0918 19:39:00.396557 18358 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0918 19:39:00.535426 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:00.723567 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:01.091397 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:01.148538 18358 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0918 19:39:01.150349 18358 out.go:177] * Verifying gcp-auth addon...
I0918 19:39:01.152830 18358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0918 19:39:01.191173 18358 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0918 19:39:01.191701 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:01.204040 18358 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-w5zgj" in "kube-system" namespace has status "Ready":"False"
I0918 19:39:01.535487 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:01.667399 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:02.035321 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:02.207043 18358 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-w5zgj" in "kube-system" namespace has status "Ready":"True"
I0918 19:39:02.207067 18358 pod_ready.go:82] duration metric: took 3.007779715s for pod "nvidia-device-plugin-daemonset-w5zgj" in "kube-system" namespace to be "Ready" ...
I0918 19:39:02.207081 18358 pod_ready.go:39] duration metric: took 8.56278401s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0918 19:39:02.207101 18358 api_server.go:52] waiting for apiserver process to appear ...
I0918 19:39:02.207170 18358 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0918 19:39:02.224385 18358 api_server.go:72] duration metric: took 9.055927801s to wait for apiserver process to appear ...
I0918 19:39:02.224415 18358 api_server.go:88] waiting for apiserver healthz status ...
I0918 19:39:02.224444 18358 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0918 19:39:02.228465 18358 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0918 19:39:02.229432 18358 api_server.go:141] control plane version: v1.31.1
I0918 19:39:02.229457 18358 api_server.go:131] duration metric: took 5.033146ms to wait for apiserver health ...
I0918 19:39:02.229467 18358 system_pods.go:43] waiting for kube-system pods to appear ...
I0918 19:39:02.257707 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:02.261953 18358 system_pods.go:59] 17 kube-system pods found
I0918 19:39:02.261985 18358 system_pods.go:61] "coredns-7c65d6cfc9-zwccs" [63bc68cd-9f53-479a-a2a5-9336a0e5deaf] Running
I0918 19:39:02.261994 18358 system_pods.go:61] "csi-hostpath-attacher-0" [06c4e199-4378-4232-bde2-37607f7da00d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0918 19:39:02.262001 18358 system_pods.go:61] "csi-hostpath-resizer-0" [31579844-294c-4f81-aa77-f7b5a6b9db22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0918 19:39:02.262008 18358 system_pods.go:61] "csi-hostpathplugin-dqj8p" [4aaa885d-1682-4ea1-8104-44fca44ecc93] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0918 19:39:02.262013 18358 system_pods.go:61] "etcd-ubuntu-20-agent-2" [473ef6bb-310b-4856-ba27-dc8195df0744] Running
I0918 19:39:02.262019 18358 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [e2fb2a1b-8b37-4761-b413-41976d61b1e8] Running
I0918 19:39:02.262024 18358 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [ee0e7890-eaa0-4cfe-9507-c0afa36eda0d] Running
I0918 19:39:02.262029 18358 system_pods.go:61] "kube-proxy-6rkhh" [9389a9dd-4c3b-4a80-8997-902aa16b27fd] Running
I0918 19:39:02.262033 18358 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [7b8e1543-8de7-45cc-a334-f0a39d7a83fe] Running
I0918 19:39:02.262040 18358 system_pods.go:61] "metrics-server-84c5f94fbc-7lhq7" [feb14068-ae2c-4ab6-8d0f-81ec97b305a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0918 19:39:02.262046 18358 system_pods.go:61] "nvidia-device-plugin-daemonset-w5zgj" [653ea08c-da5c-4557-8a4d-a3a9fd4d1000] Running
I0918 19:39:02.262067 18358 system_pods.go:61] "registry-66c9cd494c-pjkt7" [37c3d12e-c029-446f-ae1c-816691f53587] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0918 19:39:02.262075 18358 system_pods.go:61] "registry-proxy-sr6mh" [6a37092e-8132-4577-a7db-ae572e46da9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0918 19:39:02.262081 18358 system_pods.go:61] "snapshot-controller-56fcc65765-75b46" [3a59cf10-8aa3-4471-9606-a07d8292c058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0918 19:39:02.262086 18358 system_pods.go:61] "snapshot-controller-56fcc65765-g5hms" [54e4458b-6513-488a-8b09-cd4b7c02e213] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0918 19:39:02.262089 18358 system_pods.go:61] "storage-provisioner" [eed0a073-ffd8-4934-9367-a2e95f84bffd] Running
I0918 19:39:02.262094 18358 system_pods.go:61] "tiller-deploy-b48cc5f79-7zq4s" [abd0f145-1948-4210-a986-4dc65e777296] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0918 19:39:02.262098 18358 system_pods.go:74] duration metric: took 32.626772ms to wait for pod list to return data ...
I0918 19:39:02.262105 18358 default_sa.go:34] waiting for default service account to be created ...
I0918 19:39:02.264476 18358 default_sa.go:45] found service account: "default"
I0918 19:39:02.264496 18358 default_sa.go:55] duration metric: took 2.385201ms for default service account to be created ...
I0918 19:39:02.264506 18358 system_pods.go:116] waiting for k8s-apps to be running ...
I0918 19:39:02.272325 18358 system_pods.go:86] 17 kube-system pods found
I0918 19:39:02.272351 18358 system_pods.go:89] "coredns-7c65d6cfc9-zwccs" [63bc68cd-9f53-479a-a2a5-9336a0e5deaf] Running
I0918 19:39:02.272359 18358 system_pods.go:89] "csi-hostpath-attacher-0" [06c4e199-4378-4232-bde2-37607f7da00d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0918 19:39:02.272365 18358 system_pods.go:89] "csi-hostpath-resizer-0" [31579844-294c-4f81-aa77-f7b5a6b9db22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0918 19:39:02.272374 18358 system_pods.go:89] "csi-hostpathplugin-dqj8p" [4aaa885d-1682-4ea1-8104-44fca44ecc93] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0918 19:39:02.272378 18358 system_pods.go:89] "etcd-ubuntu-20-agent-2" [473ef6bb-310b-4856-ba27-dc8195df0744] Running
I0918 19:39:02.272382 18358 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [e2fb2a1b-8b37-4761-b413-41976d61b1e8] Running
I0918 19:39:02.272389 18358 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [ee0e7890-eaa0-4cfe-9507-c0afa36eda0d] Running
I0918 19:39:02.272393 18358 system_pods.go:89] "kube-proxy-6rkhh" [9389a9dd-4c3b-4a80-8997-902aa16b27fd] Running
I0918 19:39:02.272397 18358 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [7b8e1543-8de7-45cc-a334-f0a39d7a83fe] Running
I0918 19:39:02.272408 18358 system_pods.go:89] "metrics-server-84c5f94fbc-7lhq7" [feb14068-ae2c-4ab6-8d0f-81ec97b305a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0918 19:39:02.272413 18358 system_pods.go:89] "nvidia-device-plugin-daemonset-w5zgj" [653ea08c-da5c-4557-8a4d-a3a9fd4d1000] Running
I0918 19:39:02.272425 18358 system_pods.go:89] "registry-66c9cd494c-pjkt7" [37c3d12e-c029-446f-ae1c-816691f53587] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0918 19:39:02.272439 18358 system_pods.go:89] "registry-proxy-sr6mh" [6a37092e-8132-4577-a7db-ae572e46da9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0918 19:39:02.272448 18358 system_pods.go:89] "snapshot-controller-56fcc65765-75b46" [3a59cf10-8aa3-4471-9606-a07d8292c058] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0918 19:39:02.272457 18358 system_pods.go:89] "snapshot-controller-56fcc65765-g5hms" [54e4458b-6513-488a-8b09-cd4b7c02e213] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0918 19:39:02.272462 18358 system_pods.go:89] "storage-provisioner" [eed0a073-ffd8-4934-9367-a2e95f84bffd] Running
I0918 19:39:02.272470 18358 system_pods.go:89] "tiller-deploy-b48cc5f79-7zq4s" [abd0f145-1948-4210-a986-4dc65e777296] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0918 19:39:02.272483 18358 system_pods.go:126] duration metric: took 7.970024ms to wait for k8s-apps to be running ...
I0918 19:39:02.272492 18358 system_svc.go:44] waiting for kubelet service to be running ....
I0918 19:39:02.272549 18358 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0918 19:39:02.287089 18358 system_svc.go:56] duration metric: took 14.585391ms WaitForService to wait for kubelet
I0918 19:39:02.287116 18358 kubeadm.go:582] duration metric: took 9.11866731s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0918 19:39:02.287134 18358 node_conditions.go:102] verifying NodePressure condition ...
I0918 19:39:02.384598 18358 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0918 19:39:02.384626 18358 node_conditions.go:123] node cpu capacity is 8
I0918 19:39:02.384636 18358 node_conditions.go:105] duration metric: took 97.497748ms to run NodePressure ...
I0918 19:39:02.384648 18358 start.go:241] waiting for startup goroutines ...
I0918 19:39:02.535925 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:02.667565 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:03.035275 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:03.166791 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:03.535532 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:03.666536 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:04.035777 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:04.258493 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:04.535117 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:04.667308 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:05.035812 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:05.167286 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:05.535723 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:05.668065 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:06.063251 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:06.167736 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:06.535701 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:06.667160 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:07.035096 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:07.185023 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:07.534607 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:07.666959 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:08.035806 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:08.166612 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:08.535534 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:08.667848 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:09.035202 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:09.167920 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:09.535587 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:09.667145 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:10.035247 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:10.167614 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:10.535653 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:10.756814 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:11.035171 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:11.167210 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:11.535709 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:11.666977 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:12.035834 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:12.167525 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:12.535852 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:12.667484 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:13.035575 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:13.167289 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:13.535743 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:13.666933 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:14.034784 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:14.167604 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:14.534821 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:14.666990 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0918 19:39:15.037185 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:15.167722 18358 kapi.go:107] duration metric: took 21.003861204s to wait for kubernetes.io/minikube-addons=registry ...
I0918 19:39:15.535194 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:16.036021 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:16.535855 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:17.036073 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:17.557372 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:18.034459 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:18.534437 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:19.035083 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:19.536191 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:20.035212 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:20.535023 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:21.034780 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:21.535299 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:22.036007 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:22.534777 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:23.034827 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:23.534999 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:24.034916 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:24.535403 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:25.035879 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:25.535197 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:26.035608 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:26.535560 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:27.035673 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:27.535090 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:28.036078 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:28.535379 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:29.035760 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:29.536665 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:30.035324 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:30.535621 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:31.036357 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:31.535765 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:32.035249 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:32.535159 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:33.035875 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:33.536487 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:34.036283 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:34.535515 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0918 19:39:35.034848 18358 kapi.go:107] duration metric: took 38.504195995s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
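[editor's note] Both label waits resolve here: registry after ~21s (above) and csi-hostpath-driver after ~38.5s. The kapi poll is just a readiness watch over a label selector; the manual equivalent, using the selector names taken verbatim from this log, would be:

  kubectl --context minikube -n kube-system get pods \
    -l kubernetes.io/minikube-addons=registry
  kubectl --context minikube -n kube-system get pods \
    -l kubernetes.io/minikube-addons=csi-hostpath-driver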
I0918 19:39:42.656045 18358 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0918 19:39:42.656065 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:43.155921 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:43.656030 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:44.156242 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:44.655872 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:45.156089 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:45.656087 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:46.156350 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:46.656283 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:47.156308 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:47.657104 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:48.156318 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:48.656097 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:49.156264 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:49.656352 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:50.156525 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:50.656567 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:51.156634 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:51.656528 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:52.156859 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:52.655599 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:53.156772 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:53.656323 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:54.156646 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:54.655626 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:55.155945 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:55.656240 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:56.156423 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:56.656338 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:57.156271 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:57.656312 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:58.156250 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:58.656290 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:59.155956 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:39:59.656502 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:00.156784 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:00.655901 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:01.155754 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:01.655717 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:02.156023 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:02.655867 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:03.155941 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:03.656225 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:04.156413 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:04.656652 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:05.156196 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:05.662870 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:06.155473 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:06.656662 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:07.156710 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:07.657027 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:08.156176 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:08.656442 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:09.156235 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:09.656111 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:10.156877 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:10.656001 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:11.155849 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:11.656433 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:12.156797 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:12.655663 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:13.156811 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:13.655968 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:14.156741 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:14.655864 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:15.155915 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:15.656416 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:16.156314 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:16.656253 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:17.156024 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:17.656377 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:18.156490 18358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0918 19:40:18.656697 18358 kapi.go:107] duration metric: took 1m17.503865455s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0918 19:40:18.664543 18358 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0918 19:40:18.666565 18358 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0918 19:40:18.667968 18358 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
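[editor's note] The gcp-auth-skip-secret opt-out mentioned above is a pod label. A minimal sketch of a pod that would be skipped by the webhook — the label key comes from the message above; the pod name, image, and label value are placeholders, not taken from this log:

  apiVersion: v1
  kind: Pod
  metadata:
    name: no-gcp-creds            # placeholder name
    labels:
      gcp-auth-skip-secret: "true"   # key from the message above; value assumed
  spec:
    containers:
    - name: app
      image: busybox               # placeholder image
      command: ["sleep", "3600"]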
I0918 19:40:18.669492 18358 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, helm-tiller, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0918 19:40:18.671045 18358 addons.go:510] duration metric: took 1m25.507084849s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner helm-tiller metrics-server yakd storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0918 19:40:18.671094 18358 start.go:246] waiting for cluster config update ...
I0918 19:40:18.671118 18358 start.go:255] writing updated cluster config ...
I0918 19:40:18.671374 18358 exec_runner.go:51] Run: rm -f paused
I0918 19:40:18.716122 18358 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0918 19:40:18.718095 18358 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Mon 2024-08-05 23:30:02 UTC, end at Wed 2024-09-18 19:50:10 UTC. --
Sep 18 19:42:26 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:42:26.559074445Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 18 19:42:26 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:42:26.561238542Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 18 19:42:36 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:42:36Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 18 19:42:37 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:42:37.945528863Z" level=info msg="ignoring event" container=d29852014eeef11ed7cfdbb1a666fb5cd6ba83e2d3fea6b8d5e5477d5713e9fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:43:49 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:43:49.553400477Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 18 19:43:49 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:43:49.555476210Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 18 19:45:29 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:45:29Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 18 19:45:30 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:45:30.770377046Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 18 19:45:30 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:45:30.770373683Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 18 19:45:30 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:45:30.772214529Z" level=error msg="Error running exec 69bbbb8be465456041fd8eae0028f658ee21aa187776c569bd680aba377bb139 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 18 19:45:30 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:45:30.966827221Z" level=info msg="ignoring event" container=46dfa86d512c9c664e0ebb0a672d157fe919a288930db5086acec8e1069ecfd5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:45:39 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:45:39Z" level=error msg="error getting RW layer size for container ID 'd29852014eeef11ed7cfdbb1a666fb5cd6ba83e2d3fea6b8d5e5477d5713e9fb': Error response from daemon: No such container: d29852014eeef11ed7cfdbb1a666fb5cd6ba83e2d3fea6b8d5e5477d5713e9fb"
Sep 18 19:45:39 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:45:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd29852014eeef11ed7cfdbb1a666fb5cd6ba83e2d3fea6b8d5e5477d5713e9fb'"
Sep 18 19:46:39 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:46:39.552793111Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 18 19:46:39 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:46:39.555242545Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 18 19:49:09 ubuntu-20-agent-2 cri-dockerd[18918]: time="2024-09-18T19:49:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86654dc69ca9aa697293059ac96a8c2cd9b26b151f2cbb8406753639513b5496/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 18 19:49:10 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:10.159211680Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 18 19:49:10 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:10.161362470Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 18 19:49:24 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:24.544960788Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 18 19:49:24 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:24.547232415Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 18 19:49:50 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:50.554501691Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 18 19:49:50 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:49:50.556769808Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 18 19:50:09 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:50:09.616945389Z" level=info msg="ignoring event" container=86654dc69ca9aa697293059ac96a8c2cd9b26b151f2cbb8406753639513b5496 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:50:09 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:50:09.951960399Z" level=info msg="ignoring event" container=eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 18 19:50:10 ubuntu-20-agent-2 dockerd[18590]: time="2024-09-18T19:50:10.129734524Z" level=info msg="ignoring event" container=e98390f2c2154890e22784075d34b1c5f37c489992b45cc13f276c014cc9c41f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
46dfa86d512c9 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 4 minutes ago Exited gadget 6 b8ec36877581d gadget-7tl86
9ac6d59915187 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 9ad8b668615e1 gcp-auth-89d5ffd79-xjxwx
501aace3f8d42 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 332f3edbae5db csi-hostpathplugin-dqj8p
96944db015815 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 332f3edbae5db csi-hostpathplugin-dqj8p
ad8bbe941a8f6 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 332f3edbae5db csi-hostpathplugin-dqj8p
a4beae5b1d820 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 332f3edbae5db csi-hostpathplugin-dqj8p
dcefe7b0fe90c registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 332f3edbae5db csi-hostpathplugin-dqj8p
a2322603aa9b1 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 2c8d751e3ad08 csi-hostpath-resizer-0
2c39c3e33bbab registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 332f3edbae5db csi-hostpathplugin-dqj8p
6bbad19fd1f17 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 54559f02f1e87 csi-hostpath-attacher-0
0bb4e1ed88ac0 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 10015aa2d4402 snapshot-controller-56fcc65765-75b46
5bbd3b9f135cb registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 f42b014678821 snapshot-controller-56fcc65765-g5hms
44234102ecd81 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 10 minutes ago Running local-path-provisioner 0 c67dfe37e5cc1 local-path-provisioner-86d989889c-b5hqx
4744d7174f7c8 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 d10a64be3cebc yakd-dashboard-67d98fc6b-dbkgq
cfc2c868c8ecb registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 11 minutes ago Running metrics-server 0 0fda2223a9da5 metrics-server-84c5f94fbc-7lhq7
eeb81e732af94 ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f 11 minutes ago Running tiller 0 4e18b9da7151d tiller-deploy-b48cc5f79-7zq4s
b402a83186826 registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 11 minutes ago Running registry 0 96b3410ec14c7 registry-66c9cd494c-pjkt7
2bf89b49875e7 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 11 minutes ago Running cloud-spanner-emulator 0 4e3fe0f57bdff cloud-spanner-emulator-769b77f747-lvrwr
8676c3e1b5f13 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 6036eafc90f1e nvidia-device-plugin-daemonset-w5zgj
8ea02a517c77a c69fa2e9cbf5f 11 minutes ago Running coredns 0 97baaa5aa6969 coredns-7c65d6cfc9-zwccs
4cb614d6a3030 6e38f40d628db 11 minutes ago Running storage-provisioner 0 1434a351e9054 storage-provisioner
59fe8f563a56d 60c005f310ff3 11 minutes ago Running kube-proxy 0 d3a2b0d3c3234 kube-proxy-6rkhh
5b8067656dbe6 2e96e5913fc06 11 minutes ago Running etcd 0 f3c03b3d7053c etcd-ubuntu-20-agent-2
273b66fd77173 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 39d9065444c68 kube-controller-manager-ubuntu-20-agent-2
ff77f2ad8d100 9aa1fad941575 11 minutes ago Running kube-scheduler 0 54870f7d13a0f kube-scheduler-ubuntu-20-agent-2
0796b5b669ba3 6bab7719df100 11 minutes ago Running kube-apiserver 0 2f14dd667d96d kube-apiserver-ubuntu-20-agent-2
==> coredns [8ea02a517c77] <==
[INFO] 10.244.0.10:45518 - 56900 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094297s
[INFO] 10.244.0.10:33093 - 11528 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006762s
[INFO] 10.244.0.10:33093 - 59654 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000106465s
[INFO] 10.244.0.10:50614 - 65039 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000069348s
[INFO] 10.244.0.10:50614 - 11539 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000097546s
[INFO] 10.244.0.10:47318 - 65055 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000066847s
[INFO] 10.244.0.10:47318 - 51040 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000106249s
[INFO] 10.244.0.10:49574 - 29013 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000064402s
[INFO] 10.244.0.10:49574 - 599 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000100027s
[INFO] 10.244.0.10:56714 - 3989 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000068373s
[INFO] 10.244.0.10:56714 - 7831 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000120937s
[INFO] 10.244.0.24:44725 - 50096 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000315148s
[INFO] 10.244.0.24:35225 - 49988 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000372084s
[INFO] 10.244.0.24:46601 - 24104 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098131s
[INFO] 10.244.0.24:48261 - 29753 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129907s
[INFO] 10.244.0.24:46048 - 4278 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124406s
[INFO] 10.244.0.24:36044 - 11132 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098333s
[INFO] 10.244.0.24:54692 - 33076 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003488481s
[INFO] 10.244.0.24:44533 - 39182 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.009063428s
[INFO] 10.244.0.24:36985 - 15796 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003410589s
[INFO] 10.244.0.24:39978 - 37997 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005602661s
[INFO] 10.244.0.24:52101 - 4071 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003358139s
[INFO] 10.244.0.24:40989 - 10653 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00447799s
[INFO] 10.244.0.24:55784 - 18915 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002570128s
[INFO] 10.244.0.24:52846 - 20464 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002606596s
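The query bursts above are the resolver walking the pod's search path (ndots:5, per the resolv.conf rewrite in the Docker log): every suffixed candidate returns NXDOMAIN until the bare registry.kube-system.svc.cluster.local name answers NOERROR with an A record. DNS for the registry service was therefore healthy, so the wget timeout in the test was not a resolution failure. To check the service and its endpoints directly, something like:

    kubectl --context minikube -n kube-system get svc registry -o wide
    kubectl --context minikube -n kube-system get endpoints registry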
==> describe nodes <==
Name: ubuntu-20-agent-2
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-2
kubernetes.io/os=linux
minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_18T19_38_49_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-2
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 18 Sep 2024 19:38:46 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-2
AcquireTime: <unset>
RenewTime: Wed, 18 Sep 2024 19:50:01 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 18 Sep 2024 19:45:57 +0000 Wed, 18 Sep 2024 19:38:44 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 18 Sep 2024 19:45:57 +0000 Wed, 18 Sep 2024 19:38:44 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 18 Sep 2024 19:45:57 +0000 Wed, 18 Sep 2024 19:38:44 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 18 Sep 2024 19:45:57 +0000 Wed, 18 Sep 2024 19:38:46 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.138.0.48
Hostname: ubuntu-20-agent-2
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 1ec29a5c-5f40-e854-ccac-68a60c2524db
Boot ID: 31f8c253-41fe-46b0-a38a-68a1f8eb05d1
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.2.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (22 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m13s
default cloud-spanner-emulator-769b77f747-lvrwr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-7tl86 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-xjxwx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-zwccs 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-dqj8p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-2 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-2 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-2 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-6rkhh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-2 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-7lhq7 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-w5zgj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system registry-66c9cd494c-pjkt7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-75b46 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-g5hms 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system tiller-deploy-b48cc5f79-7zq4s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-b5hqx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-dbkgq 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeHasSufficientMemory 11m (x8 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m (x7 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m (x7 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal Starting 11m kubelet Starting kubelet.
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
==> dmesg <==
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a 90 81 4f 84 c3 08 06
[ +1.011345] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3c 64 58 26 a7 08 06
[ +0.023209] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 95 a5 d2 f1 f9 08 06
[ +2.793293] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 e2 95 18 93 53 08 06
[ +1.934893] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 3d f6 17 6e 9a 08 06
[ +4.120358] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 2a e4 f4 4b 02 af 08 06
[ +2.922409] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 7a 27 57 39 63 08 06
[ +0.518245] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 6b 91 d0 03 ee 08 06
[ +0.125285] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff de 91 89 a2 4c d3 08 06
[Sep18 19:40] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 ee 98 0e d8 9a 08 06
[ +0.027955] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 09 7e 3c f1 68 08 06
[ +12.014529] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff de 45 41 1b 27 1c 08 06
[ +0.000498] IPv4: martian source 10.244.0.24 from 10.244.0.5, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8a 30 d1 41 41 08 06
==> etcd [5b8067656dbe] <==
{"level":"info","ts":"2024-09-18T19:38:44.971810Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-18T19:38:44.971811Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-18T19:38:44.971833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-18T19:38:44.972153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-18T19:38:44.972177Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-18T19:38:44.972136Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-18T19:38:44.972273Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-18T19:38:44.972306Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-18T19:38:44.972965Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-18T19:38:44.973025Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-18T19:38:44.973817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
{"level":"info","ts":"2024-09-18T19:38:44.973817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-18T19:39:00.841645Z","caller":"traceutil/trace.go:171","msg":"trace[1078095586] linearizableReadLoop","detail":"{readStateIndex:901; appliedIndex:899; }","duration":"117.920404ms","start":"2024-09-18T19:39:00.723707Z","end":"2024-09-18T19:39:00.841627Z","steps":["trace[1078095586] 'read index received' (duration: 59.547465ms)","trace[1078095586] 'applied index is now lower than readState.Index' (duration: 58.372416ms)"],"step_count":2}
{"level":"info","ts":"2024-09-18T19:39:00.841691Z","caller":"traceutil/trace.go:171","msg":"trace[1807728034] transaction","detail":"{read_only:false; response_revision:882; number_of_response:1; }","duration":"117.950289ms","start":"2024-09-18T19:39:00.723718Z","end":"2024-09-18T19:39:00.841668Z","steps":["trace[1807728034] 'process raft request' (duration: 117.870867ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-18T19:39:00.841824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.093742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/gcp-auth/gcp-auth\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2024-09-18T19:39:00.841894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.048169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ubuntu-20-agent-2\" ","response":"range_response_count:1 size:4457"}
{"level":"info","ts":"2024-09-18T19:39:00.841689Z","caller":"traceutil/trace.go:171","msg":"trace[1446476925] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"118.014347ms","start":"2024-09-18T19:39:00.723654Z","end":"2024-09-18T19:39:00.841668Z","steps":["trace[1446476925] 'process raft request' (duration: 59.64422ms)","trace[1446476925] 'compare' (duration: 58.185753ms)"],"step_count":2}
{"level":"info","ts":"2024-09-18T19:39:00.841922Z","caller":"traceutil/trace.go:171","msg":"trace[1793168772] range","detail":"{range_begin:/registry/minions/ubuntu-20-agent-2; range_end:; response_count:1; response_revision:882; }","duration":"118.079655ms","start":"2024-09-18T19:39:00.723834Z","end":"2024-09-18T19:39:00.841914Z","steps":["trace[1793168772] 'agreement among raft nodes before linearized reading' (duration: 117.971692ms)"],"step_count":1}
{"level":"info","ts":"2024-09-18T19:39:00.841904Z","caller":"traceutil/trace.go:171","msg":"trace[479895819] range","detail":"{range_begin:/registry/services/specs/gcp-auth/gcp-auth; range_end:; response_count:0; response_revision:882; }","duration":"118.189257ms","start":"2024-09-18T19:39:00.723703Z","end":"2024-09-18T19:39:00.841892Z","steps":["trace[479895819] 'agreement among raft nodes before linearized reading' (duration: 118.003245ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-18T19:39:00.841864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.415399ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-18T19:39:00.842076Z","caller":"traceutil/trace.go:171","msg":"trace[577211197] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:882; }","duration":"110.644169ms","start":"2024-09-18T19:39:00.731422Z","end":"2024-09-18T19:39:00.842066Z","steps":["trace[577211197] 'agreement among raft nodes before linearized reading' (duration: 110.399242ms)"],"step_count":1}
{"level":"info","ts":"2024-09-18T19:39:01.087506Z","caller":"traceutil/trace.go:171","msg":"trace[363523474] transaction","detail":"{read_only:false; response_revision:884; number_of_response:1; }","duration":"160.817748ms","start":"2024-09-18T19:39:00.926668Z","end":"2024-09-18T19:39:01.087486Z","steps":["trace[363523474] 'process raft request' (duration: 73.092748ms)","trace[363523474] 'compare' (duration: 87.583886ms)"],"step_count":2}
{"level":"info","ts":"2024-09-18T19:48:45.113295Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1757}
{"level":"info","ts":"2024-09-18T19:48:45.138171Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1757,"took":"24.392651ms","hash":173522336,"current-db-size-bytes":8388608,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4485120,"current-db-size-in-use":"4.5 MB"}
{"level":"info","ts":"2024-09-18T19:48:45.138241Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":173522336,"revision":1757,"compact-revision":-1}
==> gcp-auth [9ac6d5991518] <==
2024/09/18 19:40:17 GCP Auth Webhook started!
2024/09/18 19:40:33 Ready to marshal response ...
2024/09/18 19:40:33 Ready to write response ...
2024/09/18 19:40:34 Ready to marshal response ...
2024/09/18 19:40:34 Ready to write response ...
2024/09/18 19:40:57 Ready to marshal response ...
2024/09/18 19:40:57 Ready to write response ...
2024/09/18 19:40:57 Ready to marshal response ...
2024/09/18 19:40:57 Ready to write response ...
2024/09/18 19:40:57 Ready to marshal response ...
2024/09/18 19:40:57 Ready to write response ...
2024/09/18 19:49:09 Ready to marshal response ...
2024/09/18 19:49:09 Ready to write response ...
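Each marshal/write pair above is one admission call into the webhook. The 19:49:09 pair lines up with the registry-test pod being created, and this mutation is what added GOOGLE_APPLICATION_CREDENTIALS and the this_is_fake project variables that reappear in the kubelet log and the busybox pod description below. To inspect what was injected into a pod from this run:

    kubectl --context minikube get pod busybox -o jsonpath='{.spec.containers[0].env}'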
==> kernel <==
19:50:10 up 32 min, 0 users, load average: 0.20, 0.31, 0.26
Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [0796b5b669ba] <==
W0918 19:39:37.543009 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.213.195:443: connect: connection refused
W0918 19:39:42.146508 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.146.207:443: connect: connection refused
E0918 19:39:42.146546 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.146.207:443: connect: connection refused" logger="UnhandledError"
W0918 19:40:04.169852 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.146.207:443: connect: connection refused
E0918 19:40:04.169897 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.146.207:443: connect: connection refused" logger="UnhandledError"
W0918 19:40:04.187487 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.146.207:443: connect: connection refused
E0918 19:40:04.187529 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.146.207:443: connect: connection refused" logger="UnhandledError"
I0918 19:40:33.974555 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0918 19:40:33.992086 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0918 19:40:47.347190 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0918 19:40:47.356182 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0918 19:40:47.476980 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0918 19:40:47.478011 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0918 19:40:47.489930 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0918 19:40:47.644005 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0918 19:40:47.652613 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0918 19:40:47.656354 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0918 19:40:47.679218 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0918 19:40:48.494772 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0918 19:40:48.509680 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0918 19:40:48.671487 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0918 19:40:48.671479 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0918 19:40:48.680296 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0918 19:40:48.741108 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0918 19:40:48.873665 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [273b66fd7717] <==
W0918 19:48:47.969622 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:48:47.969661 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:48:53.321844 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:48:53.321886 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:48:53.972967 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:48:53.973016 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:49:09.767195 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:49:09.767238 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:49:24.274592 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:49:24.274634 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:49:28.822788 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:49:28.822833 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:49:32.200151 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:49:32.200193 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:49:33.003296 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:49:33.003343 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:49:39.858851 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:49:39.858898 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:49:41.573555 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:49:41.573598 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:49:51.512526 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:49:51.512565 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0918 19:50:09.584235 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0918 19:50:09.584282 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0918 19:50:09.851684 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.009µs"
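The repeating PartialObjectMetadata watch failures above are consistent with the volcano CRDs being deleted at 19:40:48 (see the apiserver log above: "Terminating all watchers from cacher ...volcano.sh") while the controller-manager's metadata informers kept re-listing the removed resources. Likely benign for this test, and checkable with:

    kubectl --context minikube get crd | grep volcano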
==> kube-proxy [59fe8f563a56] <==
I0918 19:38:55.486418 1 server_linux.go:66] "Using iptables proxy"
I0918 19:38:55.661914 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
E0918 19:38:55.661986 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0918 19:38:55.715923 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0918 19:38:55.716034 1 server_linux.go:169] "Using iptables Proxier"
I0918 19:38:55.719679 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0918 19:38:55.720543 1 server.go:483] "Version info" version="v1.31.1"
I0918 19:38:55.720680 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0918 19:38:55.722394 1 config.go:105] "Starting endpoint slice config controller"
I0918 19:38:55.722538 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0918 19:38:55.722545 1 config.go:199] "Starting service config controller"
I0918 19:38:55.722679 1 shared_informer.go:313] Waiting for caches to sync for service config
I0918 19:38:55.722916 1 config.go:328] "Starting node config controller"
I0918 19:38:55.723062 1 shared_informer.go:313] Waiting for caches to sync for node config
I0918 19:38:55.823613 1 shared_informer.go:320] Caches are synced for node config
I0918 19:38:55.823760 1 shared_informer.go:320] Caches are synced for service config
I0918 19:38:55.823805 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [ff77f2ad8d10] <==
W0918 19:38:45.976872 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0918 19:38:45.976894 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0918 19:38:45.976993 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0918 19:38:45.977021 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:45.977130 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0918 19:38:45.977158 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0918 19:38:46.813380 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0918 19:38:46.813417 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0918 19:38:46.815256 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0918 19:38:46.815283 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:46.882030 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0918 19:38:46.882076 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:46.882841 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0918 19:38:46.882869 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0918 19:38:46.967051 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0918 19:38:46.967099 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0918 19:38:47.136295 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0918 19:38:47.136331 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:47.154646 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0918 19:38:47.154693 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0918 19:38:47.172340 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0918 19:38:47.172378 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0918 19:38:47.233776 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0918 19:38:47.233834 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0918 19:38:48.872916 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Mon 2024-08-05 23:30:02 UTC, end at Wed 2024-09-18 19:50:10 UTC. --
Sep 18 19:49:48 ubuntu-20-agent-2 kubelet[19854]: I0918 19:49:48.409768 19854 scope.go:117] "RemoveContainer" containerID="46dfa86d512c9c664e0ebb0a672d157fe919a288930db5086acec8e1069ecfd5"
Sep 18 19:49:48 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:48.409952 19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-7tl86_gadget(44c6fa29-2386-4528-a289-3494a21ed93b)\"" pod="gadget/gadget-7tl86" podUID="44c6fa29-2386-4528-a289-3494a21ed93b"
Sep 18 19:49:50 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:50.557326 19854 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
Sep 18 19:49:50 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:50.557502 19854 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tt7m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(8b8cf472-1baf-46b6-9123-b83cb79d18b7): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
Sep 18 19:49:50 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:50.558661 19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="8b8cf472-1baf-46b6-9123-b83cb79d18b7"
Sep 18 19:49:56 ubuntu-20-agent-2 kubelet[19854]: E0918 19:49:56.408140 19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="5fe4de29-d893-4ada-954b-8bfaa1ad485a"
Sep 18 19:50:02 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:02.407367 19854 scope.go:117] "RemoveContainer" containerID="46dfa86d512c9c664e0ebb0a672d157fe919a288930db5086acec8e1069ecfd5"
Sep 18 19:50:02 ubuntu-20-agent-2 kubelet[19854]: E0918 19:50:02.407536 19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-7tl86_gadget(44c6fa29-2386-4528-a289-3494a21ed93b)\"" pod="gadget/gadget-7tl86" podUID="44c6fa29-2386-4528-a289-3494a21ed93b"
Sep 18 19:50:05 ubuntu-20-agent-2 kubelet[19854]: E0918 19:50:05.408450 19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="8b8cf472-1baf-46b6-9123-b83cb79d18b7"
Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: E0918 19:50:09.408411 19854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="5fe4de29-d893-4ada-954b-8bfaa1ad485a"
Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.790810 19854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt7m5\" (UniqueName: \"kubernetes.io/projected/8b8cf472-1baf-46b6-9123-b83cb79d18b7-kube-api-access-tt7m5\") pod \"8b8cf472-1baf-46b6-9123-b83cb79d18b7\" (UID: \"8b8cf472-1baf-46b6-9123-b83cb79d18b7\") "
Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.790877 19854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b8cf472-1baf-46b6-9123-b83cb79d18b7-gcp-creds\") pod \"8b8cf472-1baf-46b6-9123-b83cb79d18b7\" (UID: \"8b8cf472-1baf-46b6-9123-b83cb79d18b7\") "
Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.790987 19854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b8cf472-1baf-46b6-9123-b83cb79d18b7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8b8cf472-1baf-46b6-9123-b83cb79d18b7" (UID: "8b8cf472-1baf-46b6-9123-b83cb79d18b7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.792986 19854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8cf472-1baf-46b6-9123-b83cb79d18b7-kube-api-access-tt7m5" (OuterVolumeSpecName: "kube-api-access-tt7m5") pod "8b8cf472-1baf-46b6-9123-b83cb79d18b7" (UID: "8b8cf472-1baf-46b6-9123-b83cb79d18b7"). InnerVolumeSpecName "kube-api-access-tt7m5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.891323 19854 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b8cf472-1baf-46b6-9123-b83cb79d18b7-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 18 19:50:09 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:09.891353 19854 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tt7m5\" (UniqueName: \"kubernetes.io/projected/8b8cf472-1baf-46b6-9123-b83cb79d18b7-kube-api-access-tt7m5\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.294171 19854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plncf\" (UniqueName: \"kubernetes.io/projected/6a37092e-8132-4577-a7db-ae572e46da9c-kube-api-access-plncf\") pod \"6a37092e-8132-4577-a7db-ae572e46da9c\" (UID: \"6a37092e-8132-4577-a7db-ae572e46da9c\") "
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.296453 19854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a37092e-8132-4577-a7db-ae572e46da9c-kube-api-access-plncf" (OuterVolumeSpecName: "kube-api-access-plncf") pod "6a37092e-8132-4577-a7db-ae572e46da9c" (UID: "6a37092e-8132-4577-a7db-ae572e46da9c"). InnerVolumeSpecName "kube-api-access-plncf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.394955 19854 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-plncf\" (UniqueName: \"kubernetes.io/projected/6a37092e-8132-4577-a7db-ae572e46da9c-kube-api-access-plncf\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.422430 19854 scope.go:117] "RemoveContainer" containerID="eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.439528 19854 scope.go:117] "RemoveContainer" containerID="eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: E0918 19:50:10.440478 19854 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4" containerID="eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.440524 19854 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"} err="failed to get container status \"eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4\": rpc error: code = Unknown desc = Error response from daemon: No such container: eae0413aa6b6eeac0cb499a412e7915fb8bae2030b2611ee37612d3b37951aa4"
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.696591 19854 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6wzs\" (UniqueName: \"kubernetes.io/projected/37c3d12e-c029-446f-ae1c-816691f53587-kube-api-access-j6wzs\") pod \"37c3d12e-c029-446f-ae1c-816691f53587\" (UID: \"37c3d12e-c029-446f-ae1c-816691f53587\") "
Sep 18 19:50:10 ubuntu-20-agent-2 kubelet[19854]: I0918 19:50:10.698520 19854 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c3d12e-c029-446f-ae1c-816691f53587-kube-api-access-j6wzs" (OuterVolumeSpecName: "kube-api-access-j6wzs") pod "37c3d12e-c029-446f-ae1c-816691f53587" (UID: "37c3d12e-c029-446f-ae1c-816691f53587"). InnerVolumeSpecName "kube-api-access-j6wzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
==> storage-provisioner [4cb614d6a303] <==
I0918 19:38:55.641900 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0918 19:38:55.654563 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0918 19:38:55.654606 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0918 19:38:55.662092 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0918 19:38:55.663286 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdcc2797-2d65-4590-b30e-fc94f03bac3b", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_84cd3cab-f4fa-4515-b9c4-636d9499dcd1 became leader
I0918 19:38:55.663494 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_84cd3cab-f4fa-4515-b9c4-636d9499dcd1!
I0918 19:38:55.764572 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_84cd3cab-f4fa-4515-b9c4-636d9499dcd1!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-66c9cd494c-pjkt7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox registry-66c9cd494c-pjkt7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod busybox registry-66c9cd494c-pjkt7: exit status 1 (67.817187ms)
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent-2/10.138.0.48
Start Time:       Wed, 18 Sep 2024 19:40:57 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
  IP:  10.244.0.26
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5bt7 (ro)
Conditions:
  Type                       Status
  PodReadyToStartContainers  True
  Initialized                True
  Ready                      False
  ContainersReady            False
  PodScheduled               True
Volumes:
  kube-api-access-k5bt7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m14s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
  Normal   Pulling    7m45s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m45s (x4 over 9m13s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m45s (x4 over 9m13s)   kubelet            Error: ErrImagePull
  Warning  Failed     7m29s (x6 over 9m13s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m10s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "registry-66c9cd494c-pjkt7" not found
** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod busybox registry-66c9cd494c-pjkt7: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.81s)
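Reviewer note: the events in the describe output above suggest a host-side credential problem rather than a bug in the addon under test. Every pull of the public image gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected by gcr.io with "unauthorized: authentication failed", leaving the busybox pod in ImagePullBackOff, which is why it appears in the non-running pods list. A minimal manual check, assuming shell access to the CI host and the minikube kubectl context (these commands are an illustrative sketch, not part of the recorded test run):

  # Reproduce the pull outside the kubelet; the image is public, so no
  # credentials should be required. A failure here points at the host's
  # Docker auth configuration rather than the cluster.
  docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

  # A stale gcr.io entry or credential helper in the Docker client config
  # is a common cause of "unauthorized" errors on public images.
  cat ~/.docker/config.json

  # Once the pull succeeds, confirm the pod recovers.
  kubectl --context minikube describe pod busybox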