=== RUN TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.63479ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8c7tp" [be7ec7f6-7cec-4f63-bab2-8844fbb26f79] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003939642s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9zk5q" [7bdaa858-4534-4dbd-b767-3de12e3d88ce] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003172952s
addons_test.go:338: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.088587714s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/20 16:56:19 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:40853 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:44 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:46 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 20 Sep 24 16:46 UTC | 20 Sep 24 16:47 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/20 16:44:46
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 16:44:46.393717 19594 out.go:345] Setting OutFile to fd 1 ...
I0920 16:44:46.393941 19594 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:44:46.393949 19594 out.go:358] Setting ErrFile to fd 2...
I0920 16:44:46.393953 19594 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:44:46.394129 19594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8660/.minikube/bin
I0920 16:44:46.394678 19594 out.go:352] Setting JSON to false
I0920 16:44:46.395479 19594 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1638,"bootTime":1726849048,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0920 16:44:46.395576 19594 start.go:139] virtualization: kvm guest
I0920 16:44:46.397621 19594 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0920 16:44:46.398910 19594 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-8660/.minikube/cache/preloaded-tarball: no such file or directory
I0920 16:44:46.398952 19594 notify.go:220] Checking for updates...
I0920 16:44:46.398954 19594 out.go:177] - MINIKUBE_LOCATION=19672
I0920 16:44:46.400353 19594 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 16:44:46.401699 19594 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
I0920 16:44:46.402894 19594 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
I0920 16:44:46.404229 19594 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0920 16:44:46.405433 19594 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0920 16:44:46.406640 19594 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 16:44:46.416317 19594 out.go:177] * Using the none driver based on user configuration
I0920 16:44:46.417622 19594 start.go:297] selected driver: none
I0920 16:44:46.417633 19594 start.go:901] validating driver "none" against <nil>
I0920 16:44:46.417643 19594 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 16:44:46.417665 19594 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0920 16:44:46.417942 19594 out.go:270] ! The 'none' driver does not respect the --memory flag
I0920 16:44:46.418410 19594 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0920 16:44:46.418612 19594 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 16:44:46.418636 19594 cni.go:84] Creating CNI manager for ""
I0920 16:44:46.418686 19594 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 16:44:46.418693 19594 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0920 16:44:46.418741 19594 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 16:44:46.420382 19594 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0920 16:44:46.421858 19594 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/config.json ...
I0920 16:44:46.421885 19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/config.json: {Name:mkdf036dff907fb437264bef45587df8a3fa5ee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:46.422000 19594 start.go:360] acquireMachinesLock for minikube: {Name:mkdc49cc563151f6fcc0b1f78bca5c30c862e88d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 16:44:46.422031 19594 start.go:364] duration metric: took 18.715µs to acquireMachinesLock for "minikube"
I0920 16:44:46.422047 19594 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0920 16:44:46.422126 19594 start.go:125] createHost starting for "" (driver="none")
I0920 16:44:46.423597 19594 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0920 16:44:46.424767 19594 exec_runner.go:51] Run: systemctl --version
I0920 16:44:46.427141 19594 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0920 16:44:46.427179 19594 client.go:168] LocalClient.Create starting
I0920 16:44:46.427263 19594 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8660/.minikube/certs/ca.pem
I0920 16:44:46.427310 19594 main.go:141] libmachine: Decoding PEM data...
I0920 16:44:46.427327 19594 main.go:141] libmachine: Parsing certificate...
I0920 16:44:46.427381 19594 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8660/.minikube/certs/cert.pem
I0920 16:44:46.427411 19594 main.go:141] libmachine: Decoding PEM data...
I0920 16:44:46.427426 19594 main.go:141] libmachine: Parsing certificate...
I0920 16:44:46.427751 19594 client.go:171] duration metric: took 560.863µs to LocalClient.Create
I0920 16:44:46.427772 19594 start.go:167] duration metric: took 632.689µs to libmachine.API.Create "minikube"
I0920 16:44:46.427778 19594 start.go:293] postStartSetup for "minikube" (driver="none")
I0920 16:44:46.427827 19594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0920 16:44:46.427862 19594 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0920 16:44:46.436479 19594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0920 16:44:46.436498 19594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0920 16:44:46.436506 19594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0920 16:44:46.438279 19594 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0920 16:44:46.439554 19594 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8660/.minikube/addons for local assets ...
I0920 16:44:46.439621 19594 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8660/.minikube/files for local assets ...
I0920 16:44:46.439647 19594 start.go:296] duration metric: took 11.862163ms for postStartSetup
I0920 16:44:46.440229 19594 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/config.json ...
I0920 16:44:46.440373 19594 start.go:128] duration metric: took 18.237035ms to createHost
I0920 16:44:46.440390 19594 start.go:83] releasing machines lock for "minikube", held for 18.348412ms
I0920 16:44:46.440844 19594 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0920 16:44:46.440938 19594 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0920 16:44:46.443952 19594 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0920 16:44:46.444001 19594 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0920 16:44:46.452933 19594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0920 16:44:46.452952 19594 start.go:495] detecting cgroup driver to use...
I0920 16:44:46.452977 19594 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0920 16:44:46.453070 19594 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0920 16:44:46.471937 19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0920 16:44:46.480822 19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0920 16:44:46.489284 19594 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0920 16:44:46.489332 19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0920 16:44:46.498803 19594 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0920 16:44:46.507255 19594 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0920 16:44:46.515367 19594 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0920 16:44:46.523540 19594 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0920 16:44:46.532138 19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0920 16:44:46.541201 19594 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0920 16:44:46.550322 19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0920 16:44:46.559318 19594 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0920 16:44:46.566350 19594 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0920 16:44:46.573296 19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 16:44:46.788249 19594 exec_runner.go:51] Run: sudo systemctl restart containerd
I0920 16:44:46.854163 19594 start.go:495] detecting cgroup driver to use...
I0920 16:44:46.854283 19594 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0920 16:44:46.854408 19594 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0920 16:44:46.872665 19594 exec_runner.go:51] Run: which cri-dockerd
I0920 16:44:46.873538 19594 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0920 16:44:46.881174 19594 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0920 16:44:46.881196 19594 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0920 16:44:46.881225 19594 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0920 16:44:46.888485 19594 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0920 16:44:46.888637 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1499762060 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0920 16:44:46.897036 19594 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0920 16:44:47.108294 19594 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0920 16:44:47.337306 19594 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0920 16:44:47.337454 19594 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0920 16:44:47.337468 19594 exec_runner.go:203] rm: /etc/docker/daemon.json
I0920 16:44:47.337513 19594 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0920 16:44:47.346330 19594 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0920 16:44:47.346453 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1209004492 /etc/docker/daemon.json
I0920 16:44:47.354106 19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 16:44:47.570786 19594 exec_runner.go:51] Run: sudo systemctl restart docker
I0920 16:44:47.868640 19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0920 16:44:47.879912 19594 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0920 16:44:47.896120 19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0920 16:44:47.908291 19594 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0920 16:44:48.118734 19594 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0920 16:44:48.337162 19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 16:44:48.561029 19594 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0920 16:44:48.574784 19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0920 16:44:48.585981 19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 16:44:48.783614 19594 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0920 16:44:48.849439 19594 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0920 16:44:48.849492 19594 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0920 16:44:48.850908 19594 start.go:563] Will wait 60s for crictl version
I0920 16:44:48.850940 19594 exec_runner.go:51] Run: which crictl
I0920 16:44:48.851766 19594 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0920 16:44:48.879685 19594 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.0
RuntimeApiVersion: v1
I0920 16:44:48.879738 19594 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0920 16:44:48.900589 19594 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0920 16:44:48.923554 19594 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
I0920 16:44:48.923627 19594 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0920 16:44:48.926355 19594 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0920 16:44:48.927659 19594 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0920 16:44:48.927757 19594 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 16:44:48.927766 19594 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
I0920 16:44:48.927844 19594 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0920 16:44:48.927885 19594 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0920 16:44:48.976977 19594 cni.go:84] Creating CNI manager for ""
I0920 16:44:48.976999 19594 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 16:44:48.977009 19594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0920 16:44:48.977029 19594 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0920 16:44:48.977150 19594 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.138.0.48
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-2"
kubeletExtraArgs:
node-ip: 10.138.0.48
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0920 16:44:48.977202 19594 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0920 16:44:48.986493 19594 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0920 16:44:48.986539 19594 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0920 16:44:48.995061 19594 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0920 16:44:48.995115 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0920 16:44:48.995061 19594 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0920 16:44:48.995186 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0920 16:44:48.995059 19594 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0920 16:44:48.995350 19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0920 16:44:49.008481 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0920 16:44:49.045054 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1821831228 /var/lib/minikube/binaries/v1.31.1/kubectl
I0920 16:44:49.047377 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube546789139 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0920 16:44:49.077918 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube30309066 /var/lib/minikube/binaries/v1.31.1/kubelet
I0920 16:44:49.142104 19594 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0920 16:44:49.150392 19594 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0920 16:44:49.150412 19594 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0920 16:44:49.150444 19594 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0920 16:44:49.158035 19594 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
I0920 16:44:49.158155 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3884025451 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0920 16:44:49.165647 19594 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0920 16:44:49.165668 19594 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0920 16:44:49.165700 19594 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0920 16:44:49.172961 19594 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0920 16:44:49.173089 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4235946222 /lib/systemd/system/kubelet.service
I0920 16:44:49.180581 19594 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0920 16:44:49.180752 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4213183433 /var/tmp/minikube/kubeadm.yaml.new
I0920 16:44:49.188521 19594 exec_runner.go:51] Run: grep 10.138.0.48 control-plane.minikube.internal$ /etc/hosts
I0920 16:44:49.189861 19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 16:44:49.408391 19594 exec_runner.go:51] Run: sudo systemctl start kubelet
I0920 16:44:49.422235 19594 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube for IP: 10.138.0.48
I0920 16:44:49.422253 19594 certs.go:194] generating shared ca certs ...
I0920 16:44:49.422270 19594 certs.go:226] acquiring lock for ca certs: {Name:mk1d8899ce2a87028cac7a49ff26964e9bc72225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:49.422384 19594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8660/.minikube/ca.key
I0920 16:44:49.422423 19594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8660/.minikube/proxy-client-ca.key
I0920 16:44:49.422433 19594 certs.go:256] generating profile certs ...
I0920 16:44:49.422481 19594 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.key
I0920 16:44:49.422494 19594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.crt with IP's: []
I0920 16:44:49.875761 19594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.crt ...
I0920 16:44:49.875789 19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.crt: {Name:mk7612666dff1775ca3525ead0c65436e5c520d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:49.875930 19594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.key ...
I0920 16:44:49.875940 19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.key: {Name:mk449a238ae36687c25ac2321fcfdc974bee5fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:49.876004 19594 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key.35c0634a
I0920 16:44:49.876019 19594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
I0920 16:44:49.982701 19594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
I0920 16:44:49.982729 19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk45d79ab0143733d8a3776acb94f00bb45ef4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:49.982849 19594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key.35c0634a ...
I0920 16:44:49.982858 19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk353fb02d782506ee48ad3d8d88d8ea9ab1cfcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:49.982908 19594 certs.go:381] copying /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt
I0920 16:44:49.982979 19594 certs.go:385] copying /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key
I0920 16:44:49.983030 19594 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.key
I0920 16:44:49.983043 19594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0920 16:44:50.115007 19594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.crt ...
I0920 16:44:50.115037 19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.crt: {Name:mkcda2f3effe768f01706934a20a050b37960bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.115160 19594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.key ...
I0920 16:44:50.115169 19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.key: {Name:mk2ed70961549e0b26ca0fe6a6bc0e06bcde52c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:44:50.115306 19594 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8660/.minikube/certs/ca-key.pem (1679 bytes)
I0920 16:44:50.115336 19594 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8660/.minikube/certs/ca.pem (1078 bytes)
I0920 16:44:50.115359 19594 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8660/.minikube/certs/cert.pem (1123 bytes)
I0920 16:44:50.115380 19594 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8660/.minikube/certs/key.pem (1679 bytes)
I0920 16:44:50.115958 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0920 16:44:50.116084 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3122914844 /var/lib/minikube/certs/ca.crt
I0920 16:44:50.124372 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0920 16:44:50.124473 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1461209255 /var/lib/minikube/certs/ca.key
I0920 16:44:50.132113 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0920 16:44:50.132206 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2552536999 /var/lib/minikube/certs/proxy-client-ca.crt
I0920 16:44:50.139860 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0920 16:44:50.139953 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1928580747 /var/lib/minikube/certs/proxy-client-ca.key
I0920 16:44:50.148101 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0920 16:44:50.148210 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube11191036 /var/lib/minikube/certs/apiserver.crt
I0920 16:44:50.157255 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0920 16:44:50.157351 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3128518087 /var/lib/minikube/certs/apiserver.key
I0920 16:44:50.164291 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0920 16:44:50.164387 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2355093440 /var/lib/minikube/certs/proxy-client.crt
I0920 16:44:50.171503 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0920 16:44:50.171610 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube726170504 /var/lib/minikube/certs/proxy-client.key
I0920 16:44:50.179676 19594 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0920 16:44:50.179691 19594 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:50.179717 19594 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:50.188474 19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0920 16:44:50.188607 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube734945398 /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:50.196039 19594 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0920 16:44:50.196146 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000014679 /var/lib/minikube/kubeconfig
I0920 16:44:50.203793 19594 exec_runner.go:51] Run: openssl version
I0920 16:44:50.206540 19594 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0920 16:44:50.214494 19594 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:50.215714 19594 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:50.215752 19594 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0920 16:44:50.218612 19594 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0920 16:44:50.226881 19594 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0920 16:44:50.227906 19594 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0920 16:44:50.227939 19594 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 16:44:50.228089 19594 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0920 16:44:50.243134 19594 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0920 16:44:50.251137 19594 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0920 16:44:50.258315 19594 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0920 16:44:50.279098 19594 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0920 16:44:50.288132 19594 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0920 16:44:50.288150 19594 kubeadm.go:157] found existing configuration files:
I0920 16:44:50.288196 19594 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0920 16:44:50.296418 19594 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0920 16:44:50.296482 19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0920 16:44:50.303793 19594 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0920 16:44:50.311104 19594 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0920 16:44:50.311154 19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0920 16:44:50.318081 19594 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0920 16:44:50.325186 19594 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0920 16:44:50.325226 19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0920 16:44:50.332835 19594 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0920 16:44:50.340494 19594 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0920 16:44:50.340541 19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0920 16:44:50.347202 19594 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0920 16:44:50.377240 19594 kubeadm.go:310] W0920 16:44:50.377143 20482 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 16:44:50.377745 19594 kubeadm.go:310] W0920 16:44:50.377700 20482 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 16:44:50.379209 19594 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0920 16:44:50.379250 19594 kubeadm.go:310] [preflight] Running pre-flight checks
I0920 16:44:50.466808 19594 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0920 16:44:50.466918 19594 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0920 16:44:50.466930 19594 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0920 16:44:50.466935 19594 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0920 16:44:50.476447 19594 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0920 16:44:50.479002 19594 out.go:235] - Generating certificates and keys ...
I0920 16:44:50.479047 19594 kubeadm.go:310] [certs] Using existing ca certificate authority
I0920 16:44:50.479083 19594 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0920 16:44:50.769741 19594 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0920 16:44:50.960520 19594 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0920 16:44:51.096069 19594 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0920 16:44:51.202338 19594 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0920 16:44:51.345509 19594 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0920 16:44:51.345590 19594 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0920 16:44:51.427893 19594 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0920 16:44:51.428064 19594 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0920 16:44:51.574477 19594 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0920 16:44:51.780289 19594 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0920 16:44:51.834279 19594 kubeadm.go:310] [certs] Generating "sa" key and public key
I0920 16:44:51.834403 19594 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0920 16:44:51.980919 19594 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0920 16:44:52.062315 19594 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0920 16:44:52.298212 19594 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0920 16:44:52.403734 19594 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0920 16:44:52.793690 19594 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0920 16:44:52.794265 19594 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0920 16:44:52.796467 19594 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0920 16:44:52.798456 19594 out.go:235] - Booting up control plane ...
I0920 16:44:52.798485 19594 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0920 16:44:52.798506 19594 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0920 16:44:52.798918 19594 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0920 16:44:52.820623 19594 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0920 16:44:52.824924 19594 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0920 16:44:52.824949 19594 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0920 16:44:53.034858 19594 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0920 16:44:53.034884 19594 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0920 16:44:53.536372 19594 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.523924ms
I0920 16:44:53.536398 19594 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0920 16:44:57.538376 19594 kubeadm.go:310] [api-check] The API server is healthy after 4.001971734s
I0920 16:44:57.549191 19594 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0920 16:44:57.558265 19594 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0920 16:44:57.573022 19594 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0920 16:44:57.573048 19594 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0920 16:44:57.579399 19594 kubeadm.go:310] [bootstrap-token] Using token: hugp6m.tyvuqbgbnvgnovg0
I0920 16:44:57.580836 19594 out.go:235] - Configuring RBAC rules ...
I0920 16:44:57.580860 19594 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0920 16:44:57.583505 19594 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0920 16:44:57.588379 19594 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0920 16:44:57.590589 19594 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0920 16:44:57.592844 19594 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0920 16:44:57.595958 19594 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0920 16:44:57.944268 19594 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0920 16:44:58.364598 19594 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0920 16:44:58.943744 19594 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0920 16:44:58.944723 19594 kubeadm.go:310]
I0920 16:44:58.944748 19594 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0920 16:44:58.944753 19594 kubeadm.go:310]
I0920 16:44:58.944757 19594 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0920 16:44:58.944761 19594 kubeadm.go:310]
I0920 16:44:58.944764 19594 kubeadm.go:310] mkdir -p $HOME/.kube
I0920 16:44:58.944768 19594 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0920 16:44:58.944771 19594 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0920 16:44:58.944775 19594 kubeadm.go:310]
I0920 16:44:58.944778 19594 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0920 16:44:58.944782 19594 kubeadm.go:310]
I0920 16:44:58.944785 19594 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0920 16:44:58.944789 19594 kubeadm.go:310]
I0920 16:44:58.944793 19594 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0920 16:44:58.944797 19594 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0920 16:44:58.944800 19594 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0920 16:44:58.944804 19594 kubeadm.go:310]
I0920 16:44:58.944807 19594 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0920 16:44:58.944811 19594 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0920 16:44:58.944814 19594 kubeadm.go:310]
I0920 16:44:58.944817 19594 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hugp6m.tyvuqbgbnvgnovg0 \
I0920 16:44:58.944822 19594 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:fa5ac8eb105ac186d25174573bbce63b062ef4a25f52bd5bc8e84536a951a851 \
I0920 16:44:58.944825 19594 kubeadm.go:310] --control-plane
I0920 16:44:58.944829 19594 kubeadm.go:310]
I0920 16:44:58.944833 19594 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0920 16:44:58.944837 19594 kubeadm.go:310]
I0920 16:44:58.944841 19594 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hugp6m.tyvuqbgbnvgnovg0 \
I0920 16:44:58.944845 19594 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:fa5ac8eb105ac186d25174573bbce63b062ef4a25f52bd5bc8e84536a951a851
I0920 16:44:58.947708 19594 cni.go:84] Creating CNI manager for ""
I0920 16:44:58.947736 19594 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 16:44:58.949444 19594 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0920 16:44:58.950712 19594 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0920 16:44:58.961932 19594 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0920 16:44:58.962056 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube830594341 /etc/cni/net.d/1-k8s.conflist
I0920 16:44:58.971138 19594 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0920 16:44:58.971219 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:58.971249 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_20T16_44_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0920 16:44:58.980373 19594 ops.go:34] apiserver oom_adj: -16
I0920 16:44:59.045655 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:44:59.545788 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:00.046704 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:00.545700 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:01.046153 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:01.546306 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:02.046406 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:02.546457 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:03.046102 19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 16:45:03.106560 19594 kubeadm.go:1113] duration metric: took 4.135387262s to wait for elevateKubeSystemPrivileges
I0920 16:45:03.106596 19594 kubeadm.go:394] duration metric: took 12.878657753s to StartCluster
I0920 16:45:03.106619 19594 settings.go:142] acquiring lock: {Name:mk6ada6352ea5bdecb1c79df6ac47b0dadd41593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:45:03.106684 19594 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19672-8660/kubeconfig
I0920 16:45:03.107232 19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/kubeconfig: {Name:mk3d4a06a73fedada4259eb022305dcbcccbad51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 16:45:03.107438 19594 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0920 16:45:03.107511 19594 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0920 16:45:03.107636 19594 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0920 16:45:03.107648 19594 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0920 16:45:03.107656 19594 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0920 16:45:03.107666 19594 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0920 16:45:03.107672 19594 addons.go:69] Setting metrics-server=true in profile "minikube"
I0920 16:45:03.107686 19594 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0920 16:45:03.107693 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.107696 19594 addons.go:234] Setting addon metrics-server=true in "minikube"
I0920 16:45:03.107700 19594 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0920 16:45:03.107710 19594 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:45:03.107725 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.107728 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.107753 19594 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0920 16:45:03.107767 19594 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0920 16:45:03.107774 19594 addons.go:69] Setting registry=true in profile "minikube"
I0920 16:45:03.107788 19594 addons.go:234] Setting addon registry=true in "minikube"
I0920 16:45:03.107793 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.107812 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.108060 19594 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0920 16:45:03.108116 19594 mustload.go:65] Loading cluster: minikube
I0920 16:45:03.107657 19594 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0920 16:45:03.108329 19594 addons.go:69] Setting volcano=true in profile "minikube"
I0920 16:45:03.108340 19594 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0920 16:45:03.108345 19594 addons.go:234] Setting addon volcano=true in "minikube"
I0920 16:45:03.108351 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.108361 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.108364 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.108363 19594 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:45:03.108378 19594 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0920 16:45:03.108378 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.108388 19594 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0920 16:45:03.108391 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.108391 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.108398 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.108399 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.108403 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.108408 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.108409 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.108421 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.108432 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.107672 19594 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0920 16:45:03.108472 19594 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0920 16:45:03.108496 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.108577 19594 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0920 16:45:03.108595 19594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0920 16:45:03.107636 19594 addons.go:69] Setting yakd=true in profile "minikube"
I0920 16:45:03.108871 19594 addons.go:234] Setting addon yakd=true in "minikube"
I0920 16:45:03.108900 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.108980 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.108993 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.109021 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.109084 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.109091 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.109098 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.109104 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.109127 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.109133 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.109191 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.109201 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.109232 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.109373 19594 out.go:177] * Configuring local host environment ...
I0920 16:45:03.108432 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.109587 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.108316 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.109671 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.108368 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.109733 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.109434 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.109752 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.109784 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.109960 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.109999 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0920 16:45:03.111148 19594 out.go:270] *
W0920 16:45:03.111167 19594 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0920 16:45:03.111175 19594 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0920 16:45:03.111182 19594 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0920 16:45:03.111187 19594 out.go:270] *
W0920 16:45:03.111224 19594 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0920 16:45:03.111230 19594 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0920 16:45:03.111236 19594 out.go:270] *
W0920 16:45:03.111260 19594 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0920 16:45:03.111268 19594 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0920 16:45:03.111273 19594 out.go:270] *
W0920 16:45:03.111279 19594 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0920 16:45:03.111304 19594 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0920 16:45:03.112660 19594 out.go:177] * Verifying Kubernetes components...
I0920 16:45:03.114100 19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 16:45:03.128407 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.129533 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.129679 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.129737 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.137332 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.137361 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.137400 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.144198 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.137333 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.144496 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.144511 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.144548 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.145097 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.149474 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.149525 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.151612 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.151658 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.151961 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.152014 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.161595 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.163647 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.163711 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.166911 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.166929 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.166940 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.166978 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.167274 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.170530 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.170755 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.172231 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.172281 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.173098 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.173121 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.173432 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.175482 19594 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0920 16:45:03.176593 19594 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0920 16:45:03.176626 19594 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0920 16:45:03.176972 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2807691950 /etc/kubernetes/addons/metrics-apiservice.yaml
I0920 16:45:03.178353 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.178404 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.178677 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.179014 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.180592 19594 out.go:177] - Using image docker.io/registry:2.8.3
I0920 16:45:03.182132 19594 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0920 16:45:03.183456 19594 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0920 16:45:03.183486 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0920 16:45:03.183621 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube616937658 /etc/kubernetes/addons/registry-rc.yaml
I0920 16:45:03.185935 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.185964 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.186406 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.186462 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.190576 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.190598 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.191231 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.191283 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.191395 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.193591 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.193619 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.198306 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.198424 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.198454 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.198475 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.198982 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.199459 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.200011 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.201546 19594 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0920 16:45:03.202562 19594 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0920 16:45:03.202605 19594 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0920 16:45:03.202648 19594 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0920 16:45:03.202676 19594 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0920 16:45:03.203621 19594 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0920 16:45:03.203644 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0920 16:45:03.203750 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3074728425 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0920 16:45:03.203851 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube423314052 /etc/kubernetes/addons/registry-svc.yaml
I0920 16:45:03.203932 19594 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 16:45:03.203950 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0920 16:45:03.204262 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3919251867 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 16:45:03.204553 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.204572 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.204604 19594 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0920 16:45:03.204632 19594 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0920 16:45:03.204761 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3081934045 /etc/kubernetes/addons/ig-namespace.yaml
I0920 16:45:03.204801 19594 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0920 16:45:03.204862 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0920 16:45:03.205051 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4291153230 /etc/kubernetes/addons/deployment.yaml
I0920 16:45:03.205440 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.205456 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.205771 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.205811 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.206307 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.206329 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.206716 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.206756 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.209671 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.210850 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.210873 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.211905 19594 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0920 16:45:03.214682 19594 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:45:03.214708 19594 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0920 16:45:03.214716 19594 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:45:03.214753 19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:45:03.214918 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.215325 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.215737 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.216504 19594 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0920 16:45:03.216539 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.217121 19594 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0920 16:45:03.217134 19594 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0920 16:45:03.217701 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.217722 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.217757 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.218192 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.218215 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.219955 19594 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0920 16:45:03.220042 19594 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0920 16:45:03.221740 19594 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0920 16:45:03.222565 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.222961 19594 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0920 16:45:03.223741 19594 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0920 16:45:03.223786 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.224612 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:03.224628 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:03.224673 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:03.224851 19594 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0920 16:45:03.225641 19594 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0920 16:45:03.225675 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0920 16:45:03.226211 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1524924394 /etc/kubernetes/addons/volcano-deployment.yaml
I0920 16:45:03.227236 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.227257 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.232409 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.232431 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.234098 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.234116 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.236219 19594 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0920 16:45:03.236498 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.238243 19594 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0920 16:45:03.238288 19594 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0920 16:45:03.239492 19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0920 16:45:03.239514 19594 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0920 16:45:03.239617 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2886266894 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0920 16:45:03.241029 19594 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0920 16:45:03.242249 19594 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0920 16:45:03.243931 19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0920 16:45:03.243960 19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0920 16:45:03.244084 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube915938413 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0920 16:45:03.244228 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.244251 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:03.244828 19594 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0920 16:45:03.244862 19594 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0920 16:45:03.244995 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2077044191 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0920 16:45:03.246319 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0920 16:45:03.246699 19594 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0920 16:45:03.246726 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0920 16:45:03.246996 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1933288019 /etc/kubernetes/addons/registry-proxy.yaml
I0920 16:45:03.250884 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0920 16:45:03.250928 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.250969 19594 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0920 16:45:03.251039 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1074630350 /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:45:03.250989 19594 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0920 16:45:03.251124 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 16:45:03.251184 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2884370248 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0920 16:45:03.253856 19594 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0920 16:45:03.255651 19594 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0920 16:45:03.255690 19594 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0920 16:45:03.255992 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2248737722 /etc/kubernetes/addons/yakd-ns.yaml
I0920 16:45:03.258614 19594 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0920 16:45:03.262153 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.265743 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0920 16:45:03.267470 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0920 16:45:03.274399 19594 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0920 16:45:03.274438 19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0920 16:45:03.274576 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1949191535 /etc/kubernetes/addons/rbac-hostpath.yaml
I0920 16:45:03.276039 19594 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0920 16:45:03.276067 19594 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0920 16:45:03.276180 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube229908915 /etc/kubernetes/addons/ig-role.yaml
I0920 16:45:03.279548 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:03.286583 19594 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0920 16:45:03.286613 19594 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0920 16:45:03.286669 19594 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0920 16:45:03.286694 19594 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0920 16:45:03.286733 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube838362008 /etc/kubernetes/addons/metrics-server-service.yaml
I0920 16:45:03.286817 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube974948099 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0920 16:45:03.287161 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0920 16:45:03.291057 19594 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0920 16:45:03.291081 19594 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0920 16:45:03.291165 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1813447522 /etc/kubernetes/addons/yakd-sa.yaml
I0920 16:45:03.292240 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.292314 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.294936 19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0920 16:45:03.294967 19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0920 16:45:03.295083 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4146359657 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0920 16:45:03.314422 19594 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0920 16:45:03.314457 19594 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0920 16:45:03.314601 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2980362868 /etc/kubernetes/addons/ig-rolebinding.yaml
I0920 16:45:03.316517 19594 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0920 16:45:03.316545 19594 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0920 16:45:03.316744 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3230597966 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0920 16:45:03.320041 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:03.320097 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:03.320263 19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0920 16:45:03.320281 19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0920 16:45:03.320383 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3702161271 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0920 16:45:03.325646 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.325673 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.327798 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0920 16:45:03.328339 19594 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0920 16:45:03.328367 19594 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0920 16:45:03.328478 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3876711540 /etc/kubernetes/addons/yakd-crb.yaml
I0920 16:45:03.337135 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.337182 19594 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0920 16:45:03.337201 19594 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0920 16:45:03.337212 19594 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0920 16:45:03.337246 19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0920 16:45:03.345570 19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0920 16:45:03.345596 19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0920 16:45:03.345601 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:03.345617 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:03.345729 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3361776696 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0920 16:45:03.350410 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:03.352357 19594 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0920 16:45:03.353794 19594 out.go:177] - Using image docker.io/busybox:stable
I0920 16:45:03.355184 19594 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 16:45:03.355212 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0920 16:45:03.355322 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2291821422 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 16:45:03.356907 19594 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0920 16:45:03.356931 19594 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0920 16:45:03.357054 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1786773614 /etc/kubernetes/addons/yakd-svc.yaml
I0920 16:45:03.360419 19594 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0920 16:45:03.360446 19594 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0920 16:45:03.360562 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2181556984 /etc/kubernetes/addons/ig-clusterrole.yaml
I0920 16:45:03.375181 19594 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0920 16:45:03.375219 19594 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0920 16:45:03.376145 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1693979917 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0920 16:45:03.389466 19594 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0920 16:45:03.389492 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0920 16:45:03.389625 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube496196015 /etc/kubernetes/addons/yakd-dp.yaml
I0920 16:45:03.392762 19594 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0920 16:45:03.392895 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2180015718 /etc/kubernetes/addons/storageclass.yaml
I0920 16:45:03.402530 19594 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0920 16:45:03.402569 19594 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0920 16:45:03.402685 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2896344940 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0920 16:45:03.404082 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0920 16:45:03.408876 19594 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0920 16:45:03.408904 19594 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0920 16:45:03.409017 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1983332709 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0920 16:45:03.422215 19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0920 16:45:03.422249 19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0920 16:45:03.422380 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3728481436 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0920 16:45:03.424108 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 16:45:03.426702 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0920 16:45:03.449765 19594 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0920 16:45:03.449814 19594 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0920 16:45:03.449956 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3366167828 /etc/kubernetes/addons/ig-crd.yaml
I0920 16:45:03.456917 19594 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:45:03.456950 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0920 16:45:03.457068 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3167440687 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:45:03.499198 19594 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0920 16:45:03.499229 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0920 16:45:03.499353 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube23259019 /etc/kubernetes/addons/ig-daemonset.yaml
I0920 16:45:03.514607 19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0920 16:45:03.514652 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0920 16:45:03.514918 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3822892204 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0920 16:45:03.528636 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0920 16:45:03.530894 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:45:03.556312 19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0920 16:45:03.556353 19594 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0920 16:45:03.556491 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2871550575 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0920 16:45:03.558663 19594 exec_runner.go:51] Run: sudo systemctl start kubelet
I0920 16:45:03.576752 19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0920 16:45:03.576795 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0920 16:45:03.576946 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube284233598 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0920 16:45:03.588142 19594 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
I0920 16:45:03.593861 19594 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
I0920 16:45:03.593886 19594 node_ready.go:38] duration metric: took 5.712961ms for node "ubuntu-20-agent-2" to be "Ready" ...
I0920 16:45:03.593898 19594 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 16:45:03.605499 19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0920 16:45:03.605533 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0920 16:45:03.605660 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1184704723 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0920 16:45:03.607548 19594 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 16:45:03.630484 19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 16:45:03.630523 19594 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0920 16:45:03.630658 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1796071118 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 16:45:03.684533 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 16:45:03.771271 19594 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0920 16:45:03.878624 19594 addons.go:475] Verifying addon registry=true in "minikube"
I0920 16:45:03.880660 19594 out.go:177] * Verifying registry addon...
I0920 16:45:03.895264 19594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0920 16:45:03.899834 19594 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0920 16:45:03.899854 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:04.190839 19594 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0920 16:45:04.281636 19594 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0920 16:45:04.377324 19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090117959s)
I0920 16:45:04.398333 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:04.652538 19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.123836851s)
I0920 16:45:04.720823 19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.294055016s)
I0920 16:45:04.722449 19594 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0920 16:45:04.900007 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:05.132507 19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.601534509s)
W0920 16:45:05.132549 19594 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 16:45:05.132574 19594 retry.go:31] will retry after 228.109144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 16:45:05.361287 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 16:45:05.399585 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:05.613663 19594 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:05.900461 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:06.084318 19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.399723367s)
I0920 16:45:06.084356 19594 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0920 16:45:06.086527 19594 out.go:177] * Verifying csi-hostpath-driver addon...
I0920 16:45:06.088544 19594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 16:45:06.110127 19594 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 16:45:06.110149 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:06.275138 19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.007630063s)
I0920 16:45:06.399090 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:06.594312 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:06.900131 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:07.093811 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:07.398807 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:07.593193 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:07.899396 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:08.093687 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:08.113285 19594 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:08.167454 19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.806115806s)
I0920 16:45:08.399265 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:08.593705 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:08.899868 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:09.093430 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:09.399923 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:09.594584 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:09.898987 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:10.094057 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:10.112185 19594 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:10.112205 19594 pod_ready.go:82] duration metric: took 6.504622461s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 16:45:10.112216 19594 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 16:45:10.268818 19594 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0920 16:45:10.268963 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3892026705 /var/lib/minikube/google_application_credentials.json
I0920 16:45:10.278568 19594 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0920 16:45:10.278692 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2117278926 /var/lib/minikube/google_cloud_project
I0920 16:45:10.287855 19594 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0920 16:45:10.287906 19594 host.go:66] Checking if "minikube" exists ...
I0920 16:45:10.288447 19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 16:45:10.288466 19594 api_server.go:166] Checking apiserver status ...
I0920 16:45:10.288498 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:10.304993 19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
I0920 16:45:10.316059 19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
I0920 16:45:10.316127 19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
I0920 16:45:10.325407 19594 api_server.go:204] freezer state: "THAWED"
I0920 16:45:10.325436 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:10.329684 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:10.329745 19594 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0920 16:45:10.393165 19594 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 16:45:10.399514 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:10.551098 19594 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0920 16:45:10.592758 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:10.614710 19594 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0920 16:45:10.614835 19594 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0920 16:45:10.615001 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2249469409 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0920 16:45:10.625062 19594 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0920 16:45:10.625091 19594 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0920 16:45:10.625196 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2737502880 /etc/kubernetes/addons/gcp-auth-service.yaml
I0920 16:45:10.635460 19594 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 16:45:10.635498 19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0920 16:45:10.635608 19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2512882713 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 16:45:10.643485 19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 16:45:10.899580 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:10.994446 19594 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0920 16:45:10.996314 19594 out.go:177] * Verifying gcp-auth addon...
I0920 16:45:10.998484 19594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0920 16:45:11.001204 19594 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 16:45:11.105022 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:11.399857 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:11.592766 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:11.899223 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:12.092219 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:12.117125 19594 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
I0920 16:45:12.399320 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:12.593212 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:12.899515 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:13.093335 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:13.398522 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:13.603631 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:13.618247 19594 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:13.618269 19594 pod_ready.go:82] duration metric: took 3.506046375s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.618281 19594 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.622810 19594 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:13.622831 19594 pod_ready.go:82] duration metric: took 4.542427ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.622843 19594 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4z8bv" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.627109 19594 pod_ready.go:93] pod "kube-proxy-4z8bv" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:13.627130 19594 pod_ready.go:82] duration metric: took 4.279162ms for pod "kube-proxy-4z8bv" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.627140 19594 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.631136 19594 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:13.631159 19594 pod_ready.go:82] duration metric: took 4.00971ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.631173 19594 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-c2k6b" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.635218 19594 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-c2k6b" in "kube-system" namespace has status "Ready":"True"
I0920 16:45:13.635235 19594 pod_ready.go:82] duration metric: took 4.054562ms for pod "nvidia-device-plugin-daemonset-c2k6b" in "kube-system" namespace to be "Ready" ...
I0920 16:45:13.635245 19594 pod_ready.go:39] duration metric: took 10.041333736s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 16:45:13.635266 19594 api_server.go:52] waiting for apiserver process to appear ...
I0920 16:45:13.635319 19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 16:45:13.653910 19594 api_server.go:72] duration metric: took 10.542575198s to wait for apiserver process to appear ...
I0920 16:45:13.653931 19594 api_server.go:88] waiting for apiserver healthz status ...
I0920 16:45:13.653958 19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 16:45:13.657763 19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 16:45:13.658674 19594 api_server.go:141] control plane version: v1.31.1
I0920 16:45:13.658697 19594 api_server.go:131] duration metric: took 4.759044ms to wait for apiserver health ...
I0920 16:45:13.658706 19594 system_pods.go:43] waiting for kube-system pods to appear ...
I0920 16:45:13.819986 19594 system_pods.go:59] 16 kube-system pods found
I0920 16:45:13.820019 19594 system_pods.go:61] "coredns-7c65d6cfc9-48qs6" [376bb7d3-255c-4beb-9c27-b35d4bd98a27] Running
I0920 16:45:13.820028 19594 system_pods.go:61] "csi-hostpath-attacher-0" [b1cb0be0-26d2-4a26-9781-f3c2fbc7f08d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 16:45:13.820035 19594 system_pods.go:61] "csi-hostpath-resizer-0" [8fc51a98-53ab-4075-87e9-50633ba372bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 16:45:13.820044 19594 system_pods.go:61] "csi-hostpathplugin-pgw8q" [70b5a0d8-d5f5-4712-93ca-dcb274c0f739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 16:45:13.820049 19594 system_pods.go:61] "etcd-ubuntu-20-agent-2" [922d60d1-b58c-4147-b1dc-1a00f4eeeb25] Running
I0920 16:45:13.820054 19594 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [64b9b3b3-ffdf-42e0-83f0-b13f88231b46] Running
I0920 16:45:13.820060 19594 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [b7657e3a-50d7-4294-946b-e4813531fecf] Running
I0920 16:45:13.820065 19594 system_pods.go:61] "kube-proxy-4z8bv" [4f24ae89-aadf-47b5-85f1-ea65df5a9426] Running
I0920 16:45:13.820071 19594 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [2f92fe05-36e9-4d75-8026-fb6d0e248c33] Running
I0920 16:45:13.820079 19594 system_pods.go:61] "metrics-server-84c5f94fbc-kmrlz" [6e334bf0-acd9-45f6-8232-8231952e001c] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 16:45:13.820085 19594 system_pods.go:61] "nvidia-device-plugin-daemonset-c2k6b" [14de1edd-c7d5-44d9-881f-cad9fc8dffde] Running
I0920 16:45:13.820090 19594 system_pods.go:61] "registry-66c9cd494c-8c7tp" [be7ec7f6-7cec-4f63-bab2-8844fbb26f79] Running
I0920 16:45:13.820096 19594 system_pods.go:61] "registry-proxy-9zk5q" [7bdaa858-4534-4dbd-b767-3de12e3d88ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 16:45:13.820102 19594 system_pods.go:61] "snapshot-controller-56fcc65765-jbh9v" [0a86461e-e296-4306-943e-9f440c47dce3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:13.820107 19594 system_pods.go:61] "snapshot-controller-56fcc65765-prdk4" [f9add925-1abf-48b7-86df-343057465374] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:13.820111 19594 system_pods.go:61] "storage-provisioner" [e1979350-1b14-4a06-9acb-a7845fc29294] Running
I0920 16:45:13.820116 19594 system_pods.go:74] duration metric: took 161.403892ms to wait for pod list to return data ...
I0920 16:45:13.820122 19594 default_sa.go:34] waiting for default service account to be created ...
I0920 16:45:13.898565 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:14.015107 19594 default_sa.go:45] found service account: "default"
I0920 16:45:14.015129 19594 default_sa.go:55] duration metric: took 195.001604ms for default service account to be created ...
I0920 16:45:14.015136 19594 system_pods.go:116] waiting for k8s-apps to be running ...
I0920 16:45:14.103666 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:14.220940 19594 system_pods.go:86] 16 kube-system pods found
I0920 16:45:14.220967 19594 system_pods.go:89] "coredns-7c65d6cfc9-48qs6" [376bb7d3-255c-4beb-9c27-b35d4bd98a27] Running
I0920 16:45:14.220979 19594 system_pods.go:89] "csi-hostpath-attacher-0" [b1cb0be0-26d2-4a26-9781-f3c2fbc7f08d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 16:45:14.220987 19594 system_pods.go:89] "csi-hostpath-resizer-0" [8fc51a98-53ab-4075-87e9-50633ba372bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 16:45:14.220997 19594 system_pods.go:89] "csi-hostpathplugin-pgw8q" [70b5a0d8-d5f5-4712-93ca-dcb274c0f739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 16:45:14.221004 19594 system_pods.go:89] "etcd-ubuntu-20-agent-2" [922d60d1-b58c-4147-b1dc-1a00f4eeeb25] Running
I0920 16:45:14.221013 19594 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [64b9b3b3-ffdf-42e0-83f0-b13f88231b46] Running
I0920 16:45:14.221024 19594 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [b7657e3a-50d7-4294-946b-e4813531fecf] Running
I0920 16:45:14.221032 19594 system_pods.go:89] "kube-proxy-4z8bv" [4f24ae89-aadf-47b5-85f1-ea65df5a9426] Running
I0920 16:45:14.221039 19594 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [2f92fe05-36e9-4d75-8026-fb6d0e248c33] Running
I0920 16:45:14.221049 19594 system_pods.go:89] "metrics-server-84c5f94fbc-kmrlz" [6e334bf0-acd9-45f6-8232-8231952e001c] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 16:45:14.221056 19594 system_pods.go:89] "nvidia-device-plugin-daemonset-c2k6b" [14de1edd-c7d5-44d9-881f-cad9fc8dffde] Running
I0920 16:45:14.221062 19594 system_pods.go:89] "registry-66c9cd494c-8c7tp" [be7ec7f6-7cec-4f63-bab2-8844fbb26f79] Running
I0920 16:45:14.221071 19594 system_pods.go:89] "registry-proxy-9zk5q" [7bdaa858-4534-4dbd-b767-3de12e3d88ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 16:45:14.221080 19594 system_pods.go:89] "snapshot-controller-56fcc65765-jbh9v" [0a86461e-e296-4306-943e-9f440c47dce3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:14.221092 19594 system_pods.go:89] "snapshot-controller-56fcc65765-prdk4" [f9add925-1abf-48b7-86df-343057465374] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 16:45:14.221098 19594 system_pods.go:89] "storage-provisioner" [e1979350-1b14-4a06-9acb-a7845fc29294] Running
I0920 16:45:14.221109 19594 system_pods.go:126] duration metric: took 205.964397ms to wait for k8s-apps to be running ...
I0920 16:45:14.221121 19594 system_svc.go:44] waiting for kubelet service to be running ....
I0920 16:45:14.221171 19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0920 16:45:14.233099 19594 system_svc.go:56] duration metric: took 11.972413ms WaitForService to wait for kubelet
I0920 16:45:14.233124 19594 kubeadm.go:582] duration metric: took 11.121796723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 16:45:14.233140 19594 node_conditions.go:102] verifying NodePressure condition ...
I0920 16:45:14.400042 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:14.416491 19594 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0920 16:45:14.416523 19594 node_conditions.go:123] node cpu capacity is 8
I0920 16:45:14.416539 19594 node_conditions.go:105] duration metric: took 183.393858ms to run NodePressure ...
I0920 16:45:14.416554 19594 start.go:241] waiting for startup goroutines ...
I0920 16:45:14.592491 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:14.898174 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:15.092941 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:15.398244 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:15.592884 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:15.899876 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:16.092983 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:16.434295 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:16.593292 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:16.898512 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:17.092282 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:17.399049 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:17.593107 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:17.899307 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:18.104272 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:18.398574 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:18.592574 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:18.899672 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:19.093034 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:19.399635 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:19.592973 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:19.899725 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:20.093304 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:20.399493 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:20.592982 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:20.898978 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:21.099216 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:21.398618 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:21.593738 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:21.899279 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:22.093335 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:22.398513 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 16:45:22.593181 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:22.901022 19594 kapi.go:107] duration metric: took 19.005760315s to wait for kubernetes.io/minikube-addons=registry ...
I0920 16:45:23.093317 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:23.593663 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:24.093391 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:24.592831 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:25.093246 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:25.604406 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:26.104373 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:26.594032 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:27.092496 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:27.604108 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:28.092996 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:28.603995 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:29.103471 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:29.593335 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:30.093882 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:30.710036 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:31.104005 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:31.592995 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:32.092506 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:32.592596 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:33.094155 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:33.603970 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:34.093481 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:34.603518 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:35.092108 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:35.593166 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:36.093546 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:36.603368 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:37.093143 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:37.593074 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:38.093982 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:38.640081 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:39.093303 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:39.592894 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:40.093246 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:40.593408 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:41.093679 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:41.593607 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 16:45:42.104006 19594 kapi.go:107] duration metric: took 36.015460603s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0920 16:45:52.502298 19594 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 16:45:52.502327 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:53.002323 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:53.507422 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:54.002263 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:54.502434 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:55.001199 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:55.502330 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:56.001429 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:56.502080 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:57.001848 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:57.502220 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:58.002673 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:58.501553 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:59.001774 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:45:59.502109 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:00.002575 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:00.501622 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:01.001339 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:01.501569 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:02.001766 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:02.501872 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:03.001839 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:03.501954 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:04.002217 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:04.502435 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:05.001245 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:05.502421 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:06.001606 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:06.501754 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:07.001336 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:07.501544 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:08.001641 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:08.501747 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:09.001950 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:09.501866 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:10.002090 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:10.501937 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:11.001818 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:11.501819 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:12.002083 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:12.502026 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:13.001875 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:13.501900 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:14.002223 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:14.503063 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:15.002382 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:15.502049 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:16.001443 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:16.501424 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:17.001422 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:17.501838 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:18.001623 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:18.502954 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:19.002086 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:19.502397 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:20.001454 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:20.501455 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:21.002118 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:21.502014 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:22.002311 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:22.502666 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:23.001307 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:23.502977 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:24.001877 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:24.502024 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:25.002268 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:25.502280 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:26.003065 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:26.501958 19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 16:46:27.002052 19594 kapi.go:107] duration metric: took 1m16.00355175s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0920 16:46:27.003563 19594 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0920 16:46:27.004960 19594 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0920 16:46:27.006230 19594 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0920 16:46:27.007646 19594 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, metrics-server, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, yakd, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0920 16:46:27.009203 19594 addons.go:510] duration metric: took 1m23.901694197s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner metrics-server storage-provisioner-rancher storage-provisioner inspektor-gadget yakd volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0920 16:46:27.009244 19594 start.go:246] waiting for cluster config update ...
I0920 16:46:27.009258 19594 start.go:255] writing updated cluster config ...
I0920 16:46:27.009493 19594 exec_runner.go:51] Run: rm -f paused
I0920 16:46:27.052475 19594 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0920 16:46:27.054553 19594 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Wed 2024-08-07 18:08:31 UTC, end at Fri 2024-09-20 16:56:19 UTC. --
Sep 20 16:47:51 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:47:51.504005020Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=eb25a3e46337e545 traceID=84390dab59c5ad9e70fc04dfdcfc5587
Sep 20 16:47:51 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:47:51.506125748Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=eb25a3e46337e545 traceID=84390dab59c5ad9e70fc04dfdcfc5587
Sep 20 16:48:31 ubuntu-20-agent-2 cri-dockerd[20140]: time="2024-09-20T16:48:31Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 20 16:48:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:48:32.505147433Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=b902693095f685ee traceID=7c6cf74371c1a3bad6e3556ce1d1dd31
Sep 20 16:48:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:48:32.507143930Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=b902693095f685ee traceID=7c6cf74371c1a3bad6e3556ce1d1dd31
Sep 20 16:48:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:48:32.896726893Z" level=info msg="ignoring event" container=e05d41be5b57769289988577a4ba80825a1558b3f27e0e8c013cd7f2d23b5f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:49:56 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:49:56.509002605Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=0499bad85de9dc14 traceID=d887bbb31d5456c2cdfb705d2cb5e4af
Sep 20 16:49:56 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:49:56.510819770Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=0499bad85de9dc14 traceID=d887bbb31d5456c2cdfb705d2cb5e4af
Sep 20 16:51:19 ubuntu-20-agent-2 cri-dockerd[20140]: time="2024-09-20T16:51:19Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 20 16:51:20 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:51:20.927736334Z" level=info msg="ignoring event" container=761272c49907093faa0b8841ba76b785a29762b124047710326c703b523cc6b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:52:46 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:52:46.506028554Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=21b665320e391c53 traceID=7d9297b08aedd25f5d4127ea53b5200f
Sep 20 16:52:46 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:52:46.508174803Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=21b665320e391c53 traceID=7d9297b08aedd25f5d4127ea53b5200f
Sep 20 16:55:19 ubuntu-20-agent-2 cri-dockerd[20140]: time="2024-09-20T16:55:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6585c043c28edaa1f85cd6168102f26b883df41566d3e3b63d047ce5248c2334/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 20 16:55:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:55:19.567053458Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=26a9ced306de28e2 traceID=31e27a1a05e2ef1bea117b3244f834b8
Sep 20 16:55:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:55:19.569055274Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=26a9ced306de28e2 traceID=31e27a1a05e2ef1bea117b3244f834b8
Sep 20 16:55:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:55:32.500023546Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8853d6cbdd932768 traceID=7f10c73c88caa084e5cab12a0e2168d0
Sep 20 16:55:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:55:32.502266435Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8853d6cbdd932768 traceID=7f10c73c88caa084e5cab12a0e2168d0
Sep 20 16:56:01 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:01.494598498Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=83ffe8c570fd226b traceID=fa8ff08aeca36425f434a8e8fd0d1b5d
Sep 20 16:56:01 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:01.496770565Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=83ffe8c570fd226b traceID=fa8ff08aeca36425f434a8e8fd0d1b5d
Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.042282190Z" level=info msg="ignoring event" container=6585c043c28edaa1f85cd6168102f26b883df41566d3e3b63d047ce5248c2334 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.304853545Z" level=info msg="ignoring event" container=3b77e8ce3973de48519d5e3f1462ffb5c19bb2238b9610ae64ee0ed8e6cdacfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.363417344Z" level=info msg="ignoring event" container=3499c33f6dc7f72c07fd07e64b7a203c8ea0d6100d362cf0a01f08fd49be947d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.442165292Z" level=info msg="ignoring event" container=7bcfe028e3c17ae776dc7da8b5ff8d2ba8ac38904882095578441f922672050b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 16:56:19 ubuntu-20-agent-2 cri-dockerd[20140]: time="2024-09-20T16:56:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-9zk5q_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.523056670Z" level=info msg="ignoring event" container=f59fd69f5f624a64712267dd52b9d7ddcf6be20e37547c0bf37be26dff631f2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
761272c499070 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 5 minutes ago Exited gadget 6 a3702797bf688 gadget-rqjhz
b824c446317f0 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 0b6d83a6662a5 gcp-auth-89d5ffd79-l7c52
9233cc1a286bb registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 fe371be30d5df csi-hostpathplugin-pgw8q
c1d3e9e939246 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 fe371be30d5df csi-hostpathplugin-pgw8q
e55441060dd21 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 fe371be30d5df csi-hostpathplugin-pgw8q
1b5567bbc8b9c registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 fe371be30d5df csi-hostpathplugin-pgw8q
8de13437db741 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 fe371be30d5df csi-hostpathplugin-pgw8q
6282e18c0c588 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 fe371be30d5df csi-hostpathplugin-pgw8q
d8ea8dd70dd1e registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 62deafb3d43de csi-hostpath-attacher-0
de3920beebf4e registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 ca72ed8e700de csi-hostpath-resizer-0
6ee920900318e registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 74b81a5c7cf84 snapshot-controller-56fcc65765-jbh9v
1c5caf88fe99f registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 e418ffd829e68 snapshot-controller-56fcc65765-prdk4
bada2cb22b84e marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 764c7f4a16df2 yakd-dashboard-67d98fc6b-bncjr
3499c33f6dc7f gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 10 minutes ago Exited registry-proxy 0 f59fd69f5f624 registry-proxy-9zk5q
2224374459b01 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 11 minutes ago Running local-path-provisioner 0 257ecc7c29379 local-path-provisioner-86d989889c-8h6tk
bfd9cd31093ba gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 11 minutes ago Running cloud-spanner-emulator 0 8a48b35ba47c2 cloud-spanner-emulator-769b77f747-q25bc
8487a72983672 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 11 minutes ago Running metrics-server 0 8e6c9063ffa4c metrics-server-84c5f94fbc-kmrlz
3b77e8ce3973d registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 11 minutes ago Exited registry 0 7bcfe028e3c17 registry-66c9cd494c-8c7tp
d76b642064b61 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 0ffc5659b2f6f nvidia-device-plugin-daemonset-c2k6b
bb39a019aea73 6e38f40d628db 11 minutes ago Running storage-provisioner 0 7653e9d16f06b storage-provisioner
ddb373bc098c1 c69fa2e9cbf5f 11 minutes ago Running coredns 0 ae5ee16ca3aa4 coredns-7c65d6cfc9-48qs6
64711f7fb5fe2 60c005f310ff3 11 minutes ago Running kube-proxy 0 78f0aeb605b90 kube-proxy-4z8bv
8b1d9d632055c 6bab7719df100 11 minutes ago Running kube-apiserver 0 a6cad0f7b31b9 kube-apiserver-ubuntu-20-agent-2
d12067daeed64 2e96e5913fc06 11 minutes ago Running etcd 0 12ab18da970db etcd-ubuntu-20-agent-2
d4a743beacae2 9aa1fad941575 11 minutes ago Running kube-scheduler 0 af2e2f749d164 kube-scheduler-ubuntu-20-agent-2
8c1debcecf77e 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 6f7d19d2fe08f kube-controller-manager-ubuntu-20-agent-2
==> coredns [ddb373bc098c] <==
[INFO] 10.244.0.8:34085 - 25797 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066024s
[INFO] 10.244.0.8:53200 - 38483 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045025s
[INFO] 10.244.0.8:53200 - 18005 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078633s
[INFO] 10.244.0.8:60849 - 15352 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000060848s
[INFO] 10.244.0.8:60849 - 56314 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000093917s
[INFO] 10.244.0.8:36641 - 28747 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000073202s
[INFO] 10.244.0.8:36641 - 64584 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000107707s
[INFO] 10.244.0.8:45074 - 51864 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000065108s
[INFO] 10.244.0.8:45074 - 59802 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000095049s
[INFO] 10.244.0.8:39482 - 34201 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072183s
[INFO] 10.244.0.8:39482 - 1182 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119903s
[INFO] 10.244.0.23:52062 - 45329 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00028552s
[INFO] 10.244.0.23:49914 - 11992 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000135866s
[INFO] 10.244.0.23:46422 - 53216 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000132762s
[INFO] 10.244.0.23:54219 - 20616 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144431s
[INFO] 10.244.0.23:46848 - 38267 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143456s
[INFO] 10.244.0.23:53945 - 64382 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000221399s
[INFO] 10.244.0.23:43151 - 31082 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003237774s
[INFO] 10.244.0.23:58751 - 12471 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005385542s
[INFO] 10.244.0.23:35640 - 56857 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003051086s
[INFO] 10.244.0.23:53411 - 36226 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003179774s
[INFO] 10.244.0.23:50848 - 16832 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003459792s
[INFO] 10.244.0.23:49092 - 16708 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003632444s
[INFO] 10.244.0.23:41497 - 32368 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001446385s
[INFO] 10.244.0.23:53917 - 52470 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002049886s
==> describe nodes <==
Name: ubuntu-20-agent-2
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-2
kubernetes.io/os=linux
minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_20T16_44_58_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-2
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 20 Sep 2024 16:44:55 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-2
AcquireTime: <unset>
RenewTime: Fri, 20 Sep 2024 16:56:10 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 20 Sep 2024 16:52:06 +0000 Fri, 20 Sep 2024 16:44:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 20 Sep 2024 16:52:06 +0000 Fri, 20 Sep 2024 16:44:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 20 Sep 2024 16:52:06 +0000 Fri, 20 Sep 2024 16:44:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 20 Sep 2024 16:52:06 +0000 Fri, 20 Sep 2024 16:44:56 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.138.0.48
Hostname: ubuntu-20-agent-2
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 1ec29a5c-5f40-e854-ccac-68a60c2524db
Boot ID: 0fd695e7-50c5-4838-9acc-b2d1cdaf04a4
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.0
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m15s
default cloud-spanner-emulator-769b77f747-q25bc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-rqjhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-l7c52 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-48qs6 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-pgw8q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-2 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-2 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-2 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-4z8bv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-2 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-kmrlz 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-c2k6b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-jbh9v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-prdk4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-8h6tk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-bncjr 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
==> dmesg <==
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 0e 35 92 0d 81 08 06
[ +0.033126] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 ed 9b 64 03 38 08 06
[ +2.557801] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 8e 2f 33 38 66 08 06
[ +1.916982] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 1e 57 a1 e9 51 08 06
[ +3.689238] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 82 f5 1c a3 4e 08 06
[ +2.838983] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff a2 c9 4a b5 06 2a 08 06
[ +0.097061] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 ac 0e a0 18 49 08 06
[ +0.186938] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff be 2b 31 ce 69 e1 08 06
[ +0.043588] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 64 a8 66 ec 3c 08 06
[Sep20 16:46] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 19 d5 e5 94 81 08 06
[ +0.028237] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 64 ce 6f 7e 5a 08 06
[ +10.731072] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 d3 ac e2 03 bd 08 06
[ +0.000441] IPv4: martian source 10.244.0.23 from 10.244.0.5, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 8a 4e 41 44 28 37 08 06
==> etcd [d12067daeed6] <==
{"level":"info","ts":"2024-09-20T16:44:54.668119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-20T16:44:54.668146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
{"level":"info","ts":"2024-09-20T16:44:54.668161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
{"level":"info","ts":"2024-09-20T16:44:54.668168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-20T16:44:54.668179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
{"level":"info","ts":"2024-09-20T16:44:54.668189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-20T16:44:54.669018Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-20T16:44:54.669023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-20T16:44:54.669046Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-20T16:44:54.669189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-20T16:44:54.669229Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-20T16:44:54.669328Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T16:44:54.670241Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T16:44:54.670296Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-20T16:44:54.670306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T16:44:54.670326Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T16:44:54.670426Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-20T16:44:54.671298Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
{"level":"info","ts":"2024-09-20T16:44:54.671500Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-20T16:45:21.392027Z","caller":"traceutil/trace.go:171","msg":"trace[534191523] transaction","detail":"{read_only:false; response_revision:911; number_of_response:1; }","duration":"116.649023ms","start":"2024-09-20T16:45:21.275363Z","end":"2024-09-20T16:45:21.392012Z","steps":["trace[534191523] 'process raft request' (duration: 116.545008ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T16:45:30.707679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.905127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-20T16:45:30.707742Z","caller":"traceutil/trace.go:171","msg":"trace[1531925896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:959; }","duration":"117.008974ms","start":"2024-09-20T16:45:30.590722Z","end":"2024-09-20T16:45:30.707731Z","steps":["trace[1531925896] 'range keys from in-memory index tree' (duration: 116.857355ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T16:54:54.972943Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1679}
{"level":"info","ts":"2024-09-20T16:54:54.995958Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1679,"took":"22.529422ms","hash":3710773180,"current-db-size-bytes":8122368,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4227072,"current-db-size-in-use":"4.2 MB"}
{"level":"info","ts":"2024-09-20T16:54:54.996008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3710773180,"revision":1679,"compact-revision":-1}
==> gcp-auth [b824c446317f] <==
2024/09/20 16:46:26 GCP Auth Webhook started!
2024/09/20 16:46:42 Ready to marshal response ...
2024/09/20 16:46:42 Ready to write response ...
2024/09/20 16:46:43 Ready to marshal response ...
2024/09/20 16:46:43 Ready to write response ...
2024/09/20 16:47:05 Ready to marshal response ...
2024/09/20 16:47:05 Ready to write response ...
2024/09/20 16:47:05 Ready to marshal response ...
2024/09/20 16:47:05 Ready to write response ...
2024/09/20 16:47:05 Ready to marshal response ...
2024/09/20 16:47:05 Ready to write response ...
2024/09/20 16:55:18 Ready to marshal response ...
2024/09/20 16:55:18 Ready to write response ...
==> kernel <==
16:56:20 up 38 min, 0 users, load average: 0.14, 0.31, 0.35
Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [8b1d9d632055] <==
W0920 16:45:45.078414 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.226.206:443: connect: connection refused
W0920 16:45:52.003518 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.136.232:443: connect: connection refused
E0920 16:45:52.003558 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.136.232:443: connect: connection refused" logger="UnhandledError"
W0920 16:46:14.022130 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.136.232:443: connect: connection refused
E0920 16:46:14.022166 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.136.232:443: connect: connection refused" logger="UnhandledError"
W0920 16:46:14.030024 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.136.232:443: connect: connection refused
E0920 16:46:14.030070 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.136.232:443: connect: connection refused" logger="UnhandledError"
I0920 16:46:42.298640 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0920 16:46:42.314913 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0920 16:46:55.680899 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0920 16:46:55.691765 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0920 16:46:55.810373 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0920 16:46:55.815031 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0920 16:46:55.827739 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0920 16:46:55.863481 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0920 16:46:55.992205 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0920 16:46:56.005837 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0920 16:46:56.025290 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0920 16:46:56.708762 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0920 16:46:56.852938 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0920 16:46:56.863912 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0920 16:46:56.964024 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0920 16:46:56.964052 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0920 16:46:57.025425 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0920 16:46:57.199533 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [8c1debcecf77] <==
W0920 16:55:19.000015 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:19.000059 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:55:21.307948 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:21.307990 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:55:21.743080 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:21.743122 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:55:22.158740 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:22.158784 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:55:25.754361 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:25.754403 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:55:36.376454 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:36.376503 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:55:39.380105 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:39.380147 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:55:54.077922 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:54.077966 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:55:54.266841 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:55:54.266890 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:56:13.234577 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:56:13.234624 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:56:14.906970 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:56:14.907012 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 16:56:19.045721 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 16:56:19.045768 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 16:56:19.270096 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.693µs"
==> kube-proxy [64711f7fb5fe] <==
I0920 16:45:04.799249 1 server_linux.go:66] "Using iptables proxy"
I0920 16:45:04.952709 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
E0920 16:45:04.952795 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0920 16:45:05.080907 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0920 16:45:05.080969 1 server_linux.go:169] "Using iptables Proxier"
I0920 16:45:05.086446 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0920 16:45:05.086788 1 server.go:483] "Version info" version="v1.31.1"
I0920 16:45:05.086818 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0920 16:45:05.089729 1 config.go:199] "Starting service config controller"
I0920 16:45:05.089757 1 shared_informer.go:313] Waiting for caches to sync for service config
I0920 16:45:05.089780 1 config.go:105] "Starting endpoint slice config controller"
I0920 16:45:05.089789 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0920 16:45:05.090865 1 config.go:328] "Starting node config controller"
I0920 16:45:05.090882 1 shared_informer.go:313] Waiting for caches to sync for node config
I0920 16:45:05.190317 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0920 16:45:05.190401 1 shared_informer.go:320] Caches are synced for service config
I0920 16:45:05.191197 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [d4a743beacae] <==
W0920 16:44:55.903435 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:55.903461 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:55.903513 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0920 16:44:55.903514 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:55.903539 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
E0920 16:44:55.903543 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:55.903739 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0920 16:44:55.903757 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0920 16:44:55.903771 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0920 16:44:55.903779 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 16:44:56.759622 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0920 16:44:56.759661 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 16:44:56.810431 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0920 16:44:56.810470 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 16:44:56.820979 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0920 16:44:56.821017 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 16:44:56.933897 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:56.933945 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:56.995374 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0920 16:44:56.995410 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0920 16:44:57.024903 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0920 16:44:57.024942 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 16:44:57.042268 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0920 16:44:57.042314 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0920 16:44:59.602146 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Wed 2024-08-07 18:08:31 UTC, end at Fri 2024-09-20 16:56:20 UTC. --
Sep 20 16:55:46 ubuntu-20-agent-2 kubelet[21031]: E0920 16:55:46.352623 21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="82d10f6c-9141-435e-ab8e-d1cb8af8b80a"
Sep 20 16:55:48 ubuntu-20-agent-2 kubelet[21031]: I0920 16:55:48.350926 21031 scope.go:117] "RemoveContainer" containerID="761272c49907093faa0b8841ba76b785a29762b124047710326c703b523cc6b3"
Sep 20 16:55:48 ubuntu-20-agent-2 kubelet[21031]: E0920 16:55:48.351096 21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rqjhz_gadget(b04b979f-1bd7-4335-87c3-a1abf4133b06)\"" pod="gadget/gadget-rqjhz" podUID="b04b979f-1bd7-4335-87c3-a1abf4133b06"
Sep 20 16:55:58 ubuntu-20-agent-2 kubelet[21031]: E0920 16:55:58.353386 21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb6d2fa1-ad89-4feb-92c6-a1bec468fff3"
Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:01.350196 21031 scope.go:117] "RemoveContainer" containerID="761272c49907093faa0b8841ba76b785a29762b124047710326c703b523cc6b3"
Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:01.350379 21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rqjhz_gadget(b04b979f-1bd7-4335-87c3-a1abf4133b06)\"" pod="gadget/gadget-rqjhz" podUID="b04b979f-1bd7-4335-87c3-a1abf4133b06"
Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:01.497266 21031 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:01.497439 21031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9k7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(82d10f6c-9141-435e-ab8e-d1cb8af8b80a): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:01.498623 21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="82d10f6c-9141-435e-ab8e-d1cb8af8b80a"
Sep 20 16:56:11 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:11.352881 21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb6d2fa1-ad89-4feb-92c6-a1bec468fff3"
Sep 20 16:56:14 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:14.350543 21031 scope.go:117] "RemoveContainer" containerID="761272c49907093faa0b8841ba76b785a29762b124047710326c703b523cc6b3"
Sep 20 16:56:14 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:14.350774 21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rqjhz_gadget(b04b979f-1bd7-4335-87c3-a1abf4133b06)\"" pod="gadget/gadget-rqjhz" podUID="b04b979f-1bd7-4335-87c3-a1abf4133b06"
Sep 20 16:56:15 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:15.352790 21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="82d10f6c-9141-435e-ab8e-d1cb8af8b80a"
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.207049 21031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-gcp-creds\") pod \"82d10f6c-9141-435e-ab8e-d1cb8af8b80a\" (UID: \"82d10f6c-9141-435e-ab8e-d1cb8af8b80a\") "
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.207116 21031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9k7k\" (UniqueName: \"kubernetes.io/projected/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-kube-api-access-t9k7k\") pod \"82d10f6c-9141-435e-ab8e-d1cb8af8b80a\" (UID: \"82d10f6c-9141-435e-ab8e-d1cb8af8b80a\") "
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.207124 21031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "82d10f6c-9141-435e-ab8e-d1cb8af8b80a" (UID: "82d10f6c-9141-435e-ab8e-d1cb8af8b80a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.207205 21031 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.209045 21031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-kube-api-access-t9k7k" (OuterVolumeSpecName: "kube-api-access-t9k7k") pod "82d10f6c-9141-435e-ab8e-d1cb8af8b80a" (UID: "82d10f6c-9141-435e-ab8e-d1cb8af8b80a"). InnerVolumeSpecName "kube-api-access-t9k7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.308272 21031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t9k7k\" (UniqueName: \"kubernetes.io/projected/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-kube-api-access-t9k7k\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.610186 21031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxjgr\" (UniqueName: \"kubernetes.io/projected/be7ec7f6-7cec-4f63-bab2-8844fbb26f79-kube-api-access-pxjgr\") pod \"be7ec7f6-7cec-4f63-bab2-8844fbb26f79\" (UID: \"be7ec7f6-7cec-4f63-bab2-8844fbb26f79\") "
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.610230 21031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-494s2\" (UniqueName: \"kubernetes.io/projected/7bdaa858-4534-4dbd-b767-3de12e3d88ce-kube-api-access-494s2\") pod \"7bdaa858-4534-4dbd-b767-3de12e3d88ce\" (UID: \"7bdaa858-4534-4dbd-b767-3de12e3d88ce\") "
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.612531 21031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdaa858-4534-4dbd-b767-3de12e3d88ce-kube-api-access-494s2" (OuterVolumeSpecName: "kube-api-access-494s2") pod "7bdaa858-4534-4dbd-b767-3de12e3d88ce" (UID: "7bdaa858-4534-4dbd-b767-3de12e3d88ce"). InnerVolumeSpecName "kube-api-access-494s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.612654 21031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7ec7f6-7cec-4f63-bab2-8844fbb26f79-kube-api-access-pxjgr" (OuterVolumeSpecName: "kube-api-access-pxjgr") pod "be7ec7f6-7cec-4f63-bab2-8844fbb26f79" (UID: "be7ec7f6-7cec-4f63-bab2-8844fbb26f79"). InnerVolumeSpecName "kube-api-access-pxjgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.711054 21031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pxjgr\" (UniqueName: \"kubernetes.io/projected/be7ec7f6-7cec-4f63-bab2-8844fbb26f79-kube-api-access-pxjgr\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.711086 21031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-494s2\" (UniqueName: \"kubernetes.io/projected/7bdaa858-4534-4dbd-b767-3de12e3d88ce-kube-api-access-494s2\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
==> storage-provisioner [bb39a019aea7] <==
I0920 16:45:05.601816 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0920 16:45:05.623279 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0920 16:45:05.623351 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0920 16:45:05.635294 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0920 16:45:05.635539 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_5109ecaa-ff23-4456-9819-6940036e747f!
I0920 16:45:05.636938 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12ed9626-6ef3-4ef7-a1fd-06f621a5fa2e", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_5109ecaa-ff23-4456-9819-6940036e747f became leader
I0920 16:45:05.736661 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_5109ecaa-ff23-4456-9819-6940036e747f!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-66c9cd494c-8c7tp registry-proxy-9zk5q
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox registry-66c9cd494c-8c7tp registry-proxy-9zk5q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod busybox registry-66c9cd494c-8c7tp registry-proxy-9zk5q: exit status 1 (80.048525ms)
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent-2/10.138.0.48
Start Time:       Fri, 20 Sep 2024 16:47:05 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.25
IPs:
  IP:  10.244.0.25
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5bcm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-x5bcm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
  Normal   Pulling    7m48s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m48s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m48s (x4 over 9m14s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m20s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m3s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "registry-66c9cd494c-8c7tp" not found
Error from server (NotFound): pods "registry-proxy-9zk5q" not found
** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod busybox registry-66c9cd494c-8c7tp registry-proxy-9zk5q: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.82s)