=== RUN TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.571808ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-h9zxc" [94a85633-fa9f-4487-8730-3b82acd43c17] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003529724s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lkpsj" [260577bf-b43b-4e23-97b2-02d10adfa092] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0038581s
addons_test.go:338: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080867199s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/20 21:00:28 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:37117 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:48 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 20:48 UTC | 20 Sep 24 20:48 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 20:48 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 20 Sep 24 20:48 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 20 Sep 24 20:48 UTC | 20 Sep 24 20:50 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 20 Sep 24 20:51 UTC | 20 Sep 24 20:51 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/20 20:48:54
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0920 20:48:54.731823 20180 out.go:345] Setting OutFile to fd 1 ...
I0920 20:48:54.732106 20180 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 20:48:54.732118 20180 out.go:358] Setting ErrFile to fd 2...
I0920 20:48:54.732125 20180 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 20:48:54.732329 20180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9477/.minikube/bin
I0920 20:48:54.732918 20180 out.go:352] Setting JSON to false
I0920 20:48:54.733832 20180 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1880,"bootTime":1726863455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0920 20:48:54.733890 20180 start.go:139] virtualization: kvm guest
I0920 20:48:54.736013 20180 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0920 20:48:54.737293 20180 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9477/.minikube/cache/preloaded-tarball: no such file or directory
I0920 20:48:54.737340 20180 notify.go:220] Checking for updates...
I0920 20:48:54.737362 20180 out.go:177] - MINIKUBE_LOCATION=19672
I0920 20:48:54.738763 20180 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0920 20:48:54.739911 20180 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
I0920 20:48:54.741291 20180 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
I0920 20:48:54.742634 20180 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0920 20:48:54.743981 20180 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0920 20:48:54.745418 20180 driver.go:394] Setting default libvirt URI to qemu:///system
I0920 20:48:54.755901 20180 out.go:177] * Using the none driver based on user configuration
I0920 20:48:54.766120 20180 start.go:297] selected driver: none
I0920 20:48:54.766138 20180 start.go:901] validating driver "none" against <nil>
I0920 20:48:54.766150 20180 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0920 20:48:54.766196 20180 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0920 20:48:54.766506 20180 out.go:270] ! The 'none' driver does not respect the --memory flag
I0920 20:48:54.767070 20180 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0920 20:48:54.767331 20180 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 20:48:54.767360 20180 cni.go:84] Creating CNI manager for ""
I0920 20:48:54.767407 20180 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 20:48:54.767419 20180 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0920 20:48:54.767451 20180 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 20:48:54.768942 20180 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0920 20:48:54.770785 20180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/config.json ...
I0920 20:48:54.770814 20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/config.json: {Name:mkc9bc0ce17452b3786f4c22062e0f8d94946f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:48:54.770943 20180 start.go:360] acquireMachinesLock for minikube: {Name:mkf9700fb566525b72391541d3ef90c9358e650d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0920 20:48:54.770980 20180 start.go:364] duration metric: took 21.858µs to acquireMachinesLock for "minikube"
I0920 20:48:54.770998   20180 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0920 20:48:54.771060 20180 start.go:125] createHost starting for "" (driver="none")
I0920 20:48:54.773431 20180 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0920 20:48:54.774675 20180 exec_runner.go:51] Run: systemctl --version
I0920 20:48:54.777223 20180 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0920 20:48:54.777263 20180 client.go:168] LocalClient.Create starting
I0920 20:48:54.777358 20180 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9477/.minikube/certs/ca.pem
I0920 20:48:54.777396 20180 main.go:141] libmachine: Decoding PEM data...
I0920 20:48:54.777417 20180 main.go:141] libmachine: Parsing certificate...
I0920 20:48:54.777492 20180 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9477/.minikube/certs/cert.pem
I0920 20:48:54.777520 20180 main.go:141] libmachine: Decoding PEM data...
I0920 20:48:54.777539 20180 main.go:141] libmachine: Parsing certificate...
I0920 20:48:54.777985 20180 client.go:171] duration metric: took 712.213µs to LocalClient.Create
I0920 20:48:54.778014 20180 start.go:167] duration metric: took 802.314µs to libmachine.API.Create "minikube"
I0920 20:48:54.778024 20180 start.go:293] postStartSetup for "minikube" (driver="none")
I0920 20:48:54.778072 20180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0920 20:48:54.778130 20180 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0920 20:48:54.788460 20180 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0920 20:48:54.788480 20180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0920 20:48:54.788489 20180 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0920 20:48:54.790360 20180 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0920 20:48:54.791539 20180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9477/.minikube/addons for local assets ...
I0920 20:48:54.791579 20180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9477/.minikube/files for local assets ...
I0920 20:48:54.791596 20180 start.go:296] duration metric: took 13.566765ms for postStartSetup
I0920 20:48:54.792141 20180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/config.json ...
I0920 20:48:54.792260 20180 start.go:128] duration metric: took 21.192568ms to createHost
I0920 20:48:54.792271 20180 start.go:83] releasing machines lock for "minikube", held for 21.280918ms
I0920 20:48:54.792579 20180 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0920 20:48:54.792629 20180 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0920 20:48:54.794371 20180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0920 20:48:54.794420 20180 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0920 20:48:54.803347 20180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0920 20:48:54.803368 20180 start.go:495] detecting cgroup driver to use...
I0920 20:48:54.803389 20180 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0920 20:48:54.803467 20180 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0920 20:48:54.820192 20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0920 20:48:54.829977 20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0920 20:48:54.839637 20180 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0920 20:48:54.839690 20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0920 20:48:54.847655 20180 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0920 20:48:54.857029 20180 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0920 20:48:54.865412 20180 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0920 20:48:54.873410 20180 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0920 20:48:54.881151 20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0920 20:48:54.889916 20180 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0920 20:48:54.898951 20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0920 20:48:54.907514 20180 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0920 20:48:54.914661 20180 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0920 20:48:54.922542 20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 20:48:55.138161 20180 exec_runner.go:51] Run: sudo systemctl restart containerd
I0920 20:48:55.204931 20180 start.go:495] detecting cgroup driver to use...
I0920 20:48:55.204985 20180 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0920 20:48:55.205101 20180 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0920 20:48:55.224157 20180 exec_runner.go:51] Run: which cri-dockerd
I0920 20:48:55.225048 20180 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0920 20:48:55.232688 20180 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0920 20:48:55.232711 20180 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0920 20:48:55.232740 20180 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0920 20:48:55.239830 20180 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0920 20:48:55.239956 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2618628913 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0920 20:48:55.247336 20180 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0920 20:48:55.465105 20180 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0920 20:48:55.686244 20180 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0920 20:48:55.686428 20180 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0920 20:48:55.686445 20180 exec_runner.go:203] rm: /etc/docker/daemon.json
I0920 20:48:55.686491 20180 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0920 20:48:55.695894 20180 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0920 20:48:55.696040 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube668087537 /etc/docker/daemon.json
I0920 20:48:55.704282 20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 20:48:55.917476 20180 exec_runner.go:51] Run: sudo systemctl restart docker
I0920 20:48:56.212394 20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0920 20:48:56.222755 20180 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0920 20:48:56.237285 20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0920 20:48:56.247742 20180 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0920 20:48:56.461712 20180 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0920 20:48:56.671792 20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 20:48:56.890919 20180 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0920 20:48:56.904298 20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0920 20:48:56.914810 20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 20:48:57.150948 20180 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0920 20:48:57.216627 20180 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0920 20:48:57.216707 20180 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0920 20:48:57.218029 20180 start.go:563] Will wait 60s for crictl version
I0920 20:48:57.218064 20180 exec_runner.go:51] Run: which crictl
I0920 20:48:57.218886 20180 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0920 20:48:57.247068 20180 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0920 20:48:57.247136 20180 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0920 20:48:57.267667 20180 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0920 20:48:57.290063 20180 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0920 20:48:57.290150 20180 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0920 20:48:57.292949 20180 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0920 20:48:57.294149   20180 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0920 20:48:57.294258 20180 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 20:48:57.294270 20180 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
I0920 20:48:57.294358 20180 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0920 20:48:57.294407 20180 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0920 20:48:57.340662 20180 cni.go:84] Creating CNI manager for ""
I0920 20:48:57.340687 20180 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 20:48:57.340702 20180 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0920 20:48:57.340722   20180 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0920 20:48:57.340886 20180 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.138.0.48
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-2"
kubeletExtraArgs:
node-ip: 10.138.0.48
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
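The generated config above uses the `kubeadm.k8s.io/v1beta3` API, which kubeadm v1.31 later flags as deprecated (see the `common.go:101` warnings during `kubeadm init` below, which suggest `kubeadm config migrate --old-config old.yaml --new-config new.yaml`). A hypothetical sketch of what the migrated headers would look like, assuming v1beta4 as the successor spec in kubeadm v1.31 (field bodies omitted; the exact migrated output comes from running the migrate command itself):

```yaml
# Hypothetical post-migration headers; produced in practice by:
#   kubeadm config migrate --old-config old.yaml --new-config new.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
# ...
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
# ...
```

The KubeletConfiguration and KubeProxyConfiguration documents are separate APIs (`kubelet.config.k8s.io/v1beta1`, `kubeproxy.config.k8s.io/v1alpha1`) and are not affected by this deprecation.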
I0920 20:48:57.340955 20180 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0920 20:48:57.349169 20180 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0920 20:48:57.349216 20180 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0920 20:48:57.358252 20180 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0920 20:48:57.358254 20180 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0920 20:48:57.358285 20180 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
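The `binary.go:74` lines above show minikube fetching each binary with a `?checksum=file:...sha256` companion URL, i.e. download-then-verify against a published digest. A minimal sketch of that verification pattern using a synthetic file (demo paths only, not minikube's actual code path):

```shell
# Demo of the checksum-verified download pattern from the log above,
# using a synthetic file in place of the real kubelet binary.
printf 'demo-binary-contents' > /tmp/demo_kubelet

# Stand-in for the published kubelet.sha256 file.
sha256sum /tmp/demo_kubelet | awk '{print $1}' > /tmp/demo_kubelet.sha256

expected="$(cat /tmp/demo_kubelet.sha256)"
actual="$(sha256sum /tmp/demo_kubelet | awk '{print $1}')"

# Install only if the digest matches.
[ "$actual" = "$expected" ] && echo "checksum ok"
```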
I0920 20:48:57.358321 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0920 20:48:57.358347 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0920 20:48:57.358289 20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0920 20:48:57.370816 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0920 20:48:57.405134 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2135074387 /var/lib/minikube/binaries/v1.31.1/kubectl
I0920 20:48:57.408030 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1924790027 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0920 20:48:57.435132 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube114419049 /var/lib/minikube/binaries/v1.31.1/kubelet
I0920 20:48:57.499241 20180 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0920 20:48:57.507525 20180 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0920 20:48:57.507546 20180 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0920 20:48:57.507579 20180 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0920 20:48:57.516376 20180 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
I0920 20:48:57.516505 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2305222324 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0920 20:48:57.524120 20180 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0920 20:48:57.524137 20180 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0920 20:48:57.524167 20180 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0920 20:48:57.531313 20180 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0920 20:48:57.531432 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube422082734 /lib/systemd/system/kubelet.service
I0920 20:48:57.538646 20180 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0920 20:48:57.538739 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3742626517 /var/tmp/minikube/kubeadm.yaml.new
I0920 20:48:57.545952 20180 exec_runner.go:51] Run: grep 10.138.0.48 control-plane.minikube.internal$ /etc/hosts
I0920 20:48:57.547147 20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 20:48:57.762774 20180 exec_runner.go:51] Run: sudo systemctl start kubelet
I0920 20:48:57.776587 20180 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube for IP: 10.138.0.48
I0920 20:48:57.776611 20180 certs.go:194] generating shared ca certs ...
I0920 20:48:57.776628 20180 certs.go:226] acquiring lock for ca certs: {Name:mk1d6196dbc1689b3628478a0c39c96ca2cfb8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:48:57.776755 20180 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9477/.minikube/ca.key
I0920 20:48:57.776794 20180 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9477/.minikube/proxy-client-ca.key
I0920 20:48:57.776803 20180 certs.go:256] generating profile certs ...
I0920 20:48:57.776854 20180 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.key
I0920 20:48:57.776867 20180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.crt with IP's: []
I0920 20:48:57.923963 20180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.crt ...
I0920 20:48:57.923991 20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.crt: {Name:mk88860d394f74c51eb6ce8b308d957fce763fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:48:57.924143 20180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.key ...
I0920 20:48:57.924155 20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.key: {Name:mk70b474052d33ba900a8a63ae147fa88926b935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:48:57.924236 20180 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key.35c0634a
I0920 20:48:57.924252 20180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
I0920 20:48:58.073293 20180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
I0920 20:48:58.073322 20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk39f9363908008685c9b4b09227e07812e5fb7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:48:58.073465 20180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key.35c0634a ...
I0920 20:48:58.073479 20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk088cecaa082d954c89f523dd8f0cee0ee4e606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:48:58.073554 20180 certs.go:381] copying /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt
I0920 20:48:58.073652 20180 certs.go:385] copying /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key
I0920 20:48:58.073707 20180 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.key
I0920 20:48:58.073720 20180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0920 20:48:58.169907 20180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.crt ...
I0920 20:48:58.169936 20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.crt: {Name:mk38f82ddc9a7f07e6396525e685b8dd38ecef11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:48:58.170076 20180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.key ...
I0920 20:48:58.170090 20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.key: {Name:mk031c3fe36dff123c932b5e7c780e82e1def28a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:48:58.170251 20180 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9477/.minikube/certs/ca-key.pem (1679 bytes)
I0920 20:48:58.170283 20180 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9477/.minikube/certs/ca.pem (1078 bytes)
I0920 20:48:58.170306 20180 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9477/.minikube/certs/cert.pem (1123 bytes)
I0920 20:48:58.170329 20180 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9477/.minikube/certs/key.pem (1675 bytes)
I0920 20:48:58.170899 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0920 20:48:58.171014 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1713794039 /var/lib/minikube/certs/ca.crt
I0920 20:48:58.179515 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0920 20:48:58.179617 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4016040475 /var/lib/minikube/certs/ca.key
I0920 20:48:58.187092 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0920 20:48:58.187207 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3960471729 /var/lib/minikube/certs/proxy-client-ca.crt
I0920 20:48:58.194354 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0920 20:48:58.194448 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3030144724 /var/lib/minikube/certs/proxy-client-ca.key
I0920 20:48:58.201715 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0920 20:48:58.201817 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3007035077 /var/lib/minikube/certs/apiserver.crt
I0920 20:48:58.209289 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0920 20:48:58.209389 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube162044716 /var/lib/minikube/certs/apiserver.key
I0920 20:48:58.216987 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0920 20:48:58.217083 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2325120447 /var/lib/minikube/certs/proxy-client.crt
I0920 20:48:58.224472 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0920 20:48:58.224597 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2687605920 /var/lib/minikube/certs/proxy-client.key
I0920 20:48:58.232141 20180 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0920 20:48:58.232157 20180 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0920 20:48:58.232185 20180 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0920 20:48:58.239289 20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0920 20:48:58.239416 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube433193939 /usr/share/ca-certificates/minikubeCA.pem
I0920 20:48:58.246560 20180 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0920 20:48:58.246662 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2452136042 /var/lib/minikube/kubeconfig
I0920 20:48:58.254606 20180 exec_runner.go:51] Run: openssl version
I0920 20:48:58.257178 20180 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0920 20:48:58.264902 20180 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0920 20:48:58.266234 20180 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
I0920 20:48:58.266267 20180 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0920 20:48:58.268867 20180 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
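The two commands above show where the `b5213941.0` name comes from: OpenSSL's subject-hash of the CA certificate, which is the filename convention the system trust store uses for lookup. A sketch of the same mechanism with a throwaway self-signed CA (demo paths and subject only; the log uses `minikubeCA.pem` under `/usr/share/ca-certificates`):

```shell
# Generate a throwaway CA cert to demonstrate the subject-hash symlink scheme.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.pem -days 1 -subj "/CN=demoCA" 2>/dev/null

# Same command as in the log: derive the 8-hex-digit subject hash.
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)

# The trust store expects a "<hash>.0" symlink pointing at the PEM.
ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"
```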
I0920 20:48:58.276058 20180 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0920 20:48:58.277096 20180 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0920 20:48:58.277142 20180 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0920 20:48:58.277254 20180 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0920 20:48:58.291940 20180 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0920 20:48:58.299534 20180 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0920 20:48:58.306888 20180 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0920 20:48:58.326413 20180 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0920 20:48:58.334289 20180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0920 20:48:58.334307 20180 kubeadm.go:157] found existing configuration files:
I0920 20:48:58.334341 20180 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0920 20:48:58.341740 20180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0920 20:48:58.341780 20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0920 20:48:58.348835 20180 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0920 20:48:58.357142 20180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0920 20:48:58.357180 20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0920 20:48:58.363861 20180 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0920 20:48:58.413600 20180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0920 20:48:58.413693 20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0920 20:48:58.421260 20180 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0920 20:48:58.428722 20180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0920 20:48:58.428766 20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0920 20:48:58.435748 20180 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0920 20:48:58.465965 20180 kubeadm.go:310] W0920 20:48:58.465858 21056 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 20:48:58.466421 20180 kubeadm.go:310] W0920 20:48:58.466370 21056 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0920 20:48:58.467888 20180 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0920 20:48:58.467938 20180 kubeadm.go:310] [preflight] Running pre-flight checks
I0920 20:48:58.560728 20180 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0920 20:48:58.560834 20180 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0920 20:48:58.560847 20180 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0920 20:48:58.560854 20180 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0920 20:48:58.570530 20180 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0920 20:48:58.573372 20180 out.go:235] - Generating certificates and keys ...
I0920 20:48:58.573412 20180 kubeadm.go:310] [certs] Using existing ca certificate authority
I0920 20:48:58.573426 20180 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0920 20:48:58.698470 20180 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0920 20:48:59.055617 20180 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0920 20:48:59.200841 20180 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0920 20:48:59.317020 20180 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0920 20:48:59.471007 20180 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0920 20:48:59.471166 20180 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0920 20:48:59.614666 20180 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0920 20:48:59.614792 20180 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0920 20:48:59.873414 20180 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0920 20:49:00.003158 20180 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0920 20:49:00.166154 20180 kubeadm.go:310] [certs] Generating "sa" key and public key
I0920 20:49:00.166294 20180 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0920 20:49:00.398511 20180 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0920 20:49:00.782639 20180 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0920 20:49:00.958242 20180 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0920 20:49:01.138387 20180 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0920 20:49:01.256933 20180 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0920 20:49:01.258059 20180 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0920 20:49:01.260233 20180 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0920 20:49:01.262332 20180 out.go:235] - Booting up control plane ...
I0920 20:49:01.262352 20180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0920 20:49:01.262368 20180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0920 20:49:01.262885 20180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0920 20:49:01.283211 20180 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0920 20:49:01.287552 20180 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0920 20:49:01.287583 20180 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0920 20:49:01.531980 20180 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0920 20:49:01.532006 20180 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0920 20:49:02.033468 20180 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.485875ms
I0920 20:49:02.033493 20180 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0920 20:49:06.535448 20180 kubeadm.go:310] [api-check] The API server is healthy after 4.501949606s
I0920 20:49:06.545969 20180 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0920 20:49:06.557760 20180 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0920 20:49:06.574267 20180 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0920 20:49:06.574296 20180 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0920 20:49:06.582313 20180 kubeadm.go:310] [bootstrap-token] Using token: 3685jq.dvyml113fme7q15o
I0920 20:49:06.583713 20180 out.go:235] - Configuring RBAC rules ...
I0920 20:49:06.583745 20180 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0920 20:49:06.587423 20180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0920 20:49:06.592723 20180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0920 20:49:06.595220 20180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0920 20:49:06.597448 20180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0920 20:49:06.599678 20180 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0920 20:49:06.941105 20180 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0920 20:49:07.360974 20180 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0920 20:49:07.941239 20180 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0920 20:49:07.942150 20180 kubeadm.go:310]
I0920 20:49:07.942170 20180 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0920 20:49:07.942175 20180 kubeadm.go:310]
I0920 20:49:07.942180 20180 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0920 20:49:07.942184 20180 kubeadm.go:310]
I0920 20:49:07.942189 20180 kubeadm.go:310] mkdir -p $HOME/.kube
I0920 20:49:07.942193 20180 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0920 20:49:07.942196 20180 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0920 20:49:07.942200 20180 kubeadm.go:310]
I0920 20:49:07.942203 20180 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0920 20:49:07.942221 20180 kubeadm.go:310]
I0920 20:49:07.942227 20180 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0920 20:49:07.942231 20180 kubeadm.go:310]
I0920 20:49:07.942235 20180 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0920 20:49:07.942239 20180 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0920 20:49:07.942244 20180 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0920 20:49:07.942259 20180 kubeadm.go:310]
I0920 20:49:07.942267 20180 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0920 20:49:07.942271 20180 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0920 20:49:07.942275 20180 kubeadm.go:310]
I0920 20:49:07.942279 20180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3685jq.dvyml113fme7q15o \
I0920 20:49:07.942282 20180 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:fb1381ab3e8d15f0a6b8994a90c93d97d2e6ae809c49b3ec6993e5295be6567a \
I0920 20:49:07.942285 20180 kubeadm.go:310] --control-plane
I0920 20:49:07.942288 20180 kubeadm.go:310]
I0920 20:49:07.942291 20180 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0920 20:49:07.942294 20180 kubeadm.go:310]
I0920 20:49:07.942296 20180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3685jq.dvyml113fme7q15o \
I0920 20:49:07.942299 20180 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:fb1381ab3e8d15f0a6b8994a90c93d97d2e6ae809c49b3ec6993e5295be6567a
I0920 20:49:07.945099 20180 cni.go:84] Creating CNI manager for ""
I0920 20:49:07.945120 20180 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0920 20:49:07.947027 20180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0920 20:49:07.948344 20180 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0920 20:49:07.958317 20180 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0920 20:49:07.958437 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2473423273 /etc/cni/net.d/1-k8s.conflist
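The `cni.go` lines above show minikube choosing the bridge CNI for the "none" driver + docker runtime and writing a 496-byte `/etc/cni/net.d/1-k8s.conflist` from memory. The exact file contents are not in the log; an illustrative minimal bridge conflist of the same general shape, assuming host-local IPAM on the `podSubnet` (10.244.0.0/16) configured earlier:

```json
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

This is a sketch of the conflist format only; the actual file minikube writes may differ in fields and values.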
I0920 20:49:07.968796 20180 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0920 20:49:07.968856 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:07.968919 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_20T20_49_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0920 20:49:07.977847 20180 ops.go:34] apiserver oom_adj: -16
I0920 20:49:08.036376 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:08.536914 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:09.036469 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:09.537460 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:10.036923 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:10.537385 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:11.037234 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:11.537386 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:12.036828 20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0920 20:49:12.097927 20180 kubeadm.go:1113] duration metric: took 4.129125261s to wait for elevateKubeSystemPrivileges
I0920 20:49:12.097961 20180 kubeadm.go:394] duration metric: took 13.820824797s to StartCluster
I0920 20:49:12.097980 20180 settings.go:142] acquiring lock: {Name:mkffd6871e00198385cdf47f230b5743b288e4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:49:12.098055 20180 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19672-9477/kubeconfig
I0920 20:49:12.098678 20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/kubeconfig: {Name:mk42d63689d61c382c93256ce59e3b499a97143c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0920 20:49:12.098911 20180 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0920 20:49:12.098900 20180 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0920 20:49:12.099039 20180 addons.go:69] Setting yakd=true in profile "minikube"
I0920 20:49:12.099059 20180 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0920 20:49:12.099066 20180 addons.go:234] Setting addon yakd=true in "minikube"
I0920 20:49:12.099057 20180 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0920 20:49:12.099072 20180 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0920 20:49:12.099079 20180 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0920 20:49:12.099094 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.099106 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.099107 20180 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 20:49:12.099112 20180 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0920 20:49:12.099126 20180 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0920 20:49:12.099153 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.099160 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.099165 20180 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0920 20:49:12.099176 20180 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0920 20:49:12.099180 20180 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0920 20:49:12.099192 20180 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0920 20:49:12.099200 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.099262 20180 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0920 20:49:12.099276 20180 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0920 20:49:12.099299 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.099717 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.099735 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.099771 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.099778 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.099788 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.099041 20180 addons.go:69] Setting metrics-server=true in profile "minikube"
I0920 20:49:12.099801 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.099811 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.099827 20180 addons.go:69] Setting registry=true in profile "minikube"
I0920 20:49:12.099792 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.099837 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.099840 20180 addons.go:234] Setting addon registry=true in "minikube"
I0920 20:49:12.099858 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.099859 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.099813 20180 addons.go:234] Setting addon metrics-server=true in "minikube"
I0920 20:49:12.099049 20180 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0920 20:49:12.099872 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.099067 20180 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0920 20:49:12.099159 20180 addons.go:69] Setting volcano=true in profile "minikube"
I0920 20:49:12.099888 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.099895 20180 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0920 20:49:12.099899 20180 addons.go:234] Setting addon volcano=true in "minikube"
I0920 20:49:12.099884 20180 mustload.go:65] Loading cluster: minikube
I0920 20:49:12.099913 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.099919 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.099829 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.100072 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.100079 20180 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 20:49:12.099897 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.099899 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.100474 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.100495 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.100506 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.100524 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.100531 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.100539 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.099053 20180 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0920 20:49:12.100617 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.100722 20180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0920 20:49:12.101820 20180 out.go:177] * Configuring local host environment ...
I0920 20:49:12.099889 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.102587 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.102711 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.102712 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.102727 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.102749 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.102766 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.099867 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.103261 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.102862 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.103337 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.103374 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.099864 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0920 20:49:12.104569 20180 out.go:270] *
W0920 20:49:12.104619 20180 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0920 20:49:12.104628 20180 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0920 20:49:12.104634 20180 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0920 20:49:12.104649 20180 out.go:270] *
W0920 20:49:12.104696 20180 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0920 20:49:12.104708 20180 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0920 20:49:12.104716 20180 out.go:270] *
W0920 20:49:12.104742 20180 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0920 20:49:12.104753 20180 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0920 20:49:12.104759 20180 out.go:270] *
W0920 20:49:12.104765 20180 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0920 20:49:12.104791 20180 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0920 20:49:12.099863 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.106006 20180 out.go:177] * Verifying Kubernetes components...
I0920 20:49:12.106220 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.106238 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.106296 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.107574 20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0920 20:49:12.121564 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.121695 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.125190 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.126301 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.126627 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.129767 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.130768 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.132044 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.136645 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.138819 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.142974 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.143373 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.143456 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.143507 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.144748 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.144838 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.146715 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.147972 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.148020 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.157887 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.157942 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.158307 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.158352 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.158703 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.158801 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.160003 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.160067 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.160697 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.160746 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.161661 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.161707 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.162020 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.162047 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.165685 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.165737 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.166128 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.166304 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.167414 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.169259 20180 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0920 20:49:12.170553 20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0920 20:49:12.170582 20180 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0920 20:49:12.170720 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1302454244 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0920 20:49:12.176118 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.176142 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.179131 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.179155 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.179509 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.179528 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.180235 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.180280 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.184032 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.184053 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.184133 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.184151 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.184460 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.185207 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.185224 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.187057 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.187924 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.187943 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.188721 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.188886 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.189089 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.189711 20180 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0920 20:49:12.189748 20180 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0920 20:49:12.190730 20180 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0920 20:49:12.191710 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.193237 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.192010 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.192452 20180 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0920 20:49:12.192511 20180 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0920 20:49:12.192769 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.193779 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.194508 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.197341 20180 out.go:177] - Using image docker.io/registry:2.8.3
I0920 20:49:12.197432 20180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0920 20:49:12.197450 20180 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0920 20:49:12.197458 20180 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0920 20:49:12.197496 20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0920 20:49:12.197554 20180 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0920 20:49:12.197582 20180 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0920 20:49:12.197609 20180 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 20:49:12.197684 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0920 20:49:12.197738 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2001933503 /etc/kubernetes/addons/ig-namespace.yaml
I0920 20:49:12.197805 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube651718609 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 20:49:12.198100 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.194401 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.198572 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.198671 20180 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0920 20:49:12.198697 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0920 20:49:12.198822 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube260274205 /etc/kubernetes/addons/registry-rc.yaml
I0920 20:49:12.199005 20180 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0920 20:49:12.199054 20180 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0920 20:49:12.199066 20180 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0920 20:49:12.199087 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.199120 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.199701 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.199715 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.199715 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.199728 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.199744 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.200332 20180 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0920 20:49:12.200424 20180 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0920 20:49:12.200565 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2216114389 /etc/kubernetes/addons/metrics-apiservice.yaml
I0920 20:49:12.201418 20180 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0920 20:49:12.201438 20180 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0920 20:49:12.201504 20180 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0920 20:49:12.201536 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube999839601 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0920 20:49:12.203977 20180 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0920 20:49:12.204014 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0920 20:49:12.204772 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2109802716 /etc/kubernetes/addons/volcano-deployment.yaml
I0920 20:49:12.204252 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.205274 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.205779 20180 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0920 20:49:12.205821 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:12.206519 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:12.206537 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:12.206570 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:12.206585 20180 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0920 20:49:12.207755 20180 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0920 20:49:12.207880 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.207910 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.215809 20180 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0920 20:49:12.215833 20180 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0920 20:49:12.215941 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3509991536 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0920 20:49:12.216958 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.222408 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.222415 20180 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0920 20:49:12.222472 20180 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0920 20:49:12.226511 20180 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0920 20:49:12.226676 20180 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0920 20:49:12.226710 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0920 20:49:12.226847 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2942876505 /etc/kubernetes/addons/deployment.yaml
I0920 20:49:12.227043 20180 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0920 20:49:12.227077 20180 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0920 20:49:12.228009 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2548288831 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0920 20:49:12.234242 20180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0920 20:49:12.234266 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0920 20:49:12.234418 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2274237374 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0920 20:49:12.240569 20180 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0920 20:49:12.246071 20180 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0920 20:49:12.246100 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0920 20:49:12.246104 20180 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0920 20:49:12.246225 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2483380018 /etc/kubernetes/addons/ig-role.yaml
I0920 20:49:12.247487 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0920 20:49:12.247775 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4100326101 /etc/kubernetes/addons/storage-provisioner.yaml
I0920 20:49:12.248014 20180 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0920 20:49:12.248244 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.248293 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.248404 20180 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0920 20:49:12.248422 20180 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0920 20:49:12.248525 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4293824614 /etc/kubernetes/addons/registry-svc.yaml
I0920 20:49:12.248648 20180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0920 20:49:12.248665 20180 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0920 20:49:12.249432 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000255514 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0920 20:49:12.250098 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0920 20:49:12.250841 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.250861 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.251219 20180 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0920 20:49:12.253015 20180 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0920 20:49:12.254038 20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0920 20:49:12.254066 20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0920 20:49:12.254182 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1677312610 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0920 20:49:12.255607 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.256763 20180 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0920 20:49:12.257961 20180 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0920 20:49:12.257990 20180 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0920 20:49:12.258103 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1538150707 /etc/kubernetes/addons/yakd-ns.yaml
I0920 20:49:12.262679 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0920 20:49:12.263870 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:12.268234 20180 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0920 20:49:12.273116 20180 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0920 20:49:12.273173 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0920 20:49:12.273297 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1322574623 /etc/kubernetes/addons/registry-proxy.yaml
I0920 20:49:12.273126 20180 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0920 20:49:12.273452 20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0920 20:49:12.273856 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2563382683 /etc/kubernetes/addons/rbac-hostpath.yaml
I0920 20:49:12.277361 20180 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0920 20:49:12.277383 20180 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0920 20:49:12.277498 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3876785540 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0920 20:49:12.277873 20180 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0920 20:49:12.277899 20180 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0920 20:49:12.278005 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4282540638 /etc/kubernetes/addons/yakd-sa.yaml
I0920 20:49:12.278928 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0920 20:49:12.280021 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.280048 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.284158 20180 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0920 20:49:12.284187 20180 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0920 20:49:12.293523 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.294441 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0920 20:49:12.294786 20180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0920 20:49:12.294812 20180 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0920 20:49:12.294938 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1650690386 /etc/kubernetes/addons/metrics-server-service.yaml
I0920 20:49:12.298233 20180 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0920 20:49:12.299601 20180 out.go:177] - Using image docker.io/busybox:stable
I0920 20:49:12.300899 20180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 20:49:12.300932 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0920 20:49:12.301060 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube637399561 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 20:49:12.302414 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2761356044 /etc/kubernetes/addons/ig-rolebinding.yaml
I0920 20:49:12.309076 20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0920 20:49:12.309062 20180 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0920 20:49:12.309117 20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0920 20:49:12.309121 20180 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0920 20:49:12.309241 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1567994081 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0920 20:49:12.309254 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1095153991 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0920 20:49:12.318900 20180 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0920 20:49:12.318932 20180 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0920 20:49:12.319756 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1833948798 /etc/kubernetes/addons/yakd-crb.yaml
I0920 20:49:12.323963 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0920 20:49:12.326747 20180 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 20:49:12.326781 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0920 20:49:12.326983 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2697256451 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 20:49:12.330513 20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0920 20:49:12.330540 20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0920 20:49:12.330688 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube217740776 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0920 20:49:12.331647 20180 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0920 20:49:12.331676 20180 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0920 20:49:12.331807 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2046182493 /etc/kubernetes/addons/yakd-svc.yaml
I0920 20:49:12.334048 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:12.334108 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:12.342080 20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0920 20:49:12.342109 20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0920 20:49:12.342229 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube614902354 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0920 20:49:12.347304 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:12.347331 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:12.349550 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0920 20:49:12.352513 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:12.352557 20180 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0920 20:49:12.352570 20180 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0920 20:49:12.352577 20180 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0920 20:49:12.352615 20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0920 20:49:12.369524 20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0920 20:49:12.369558 20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0920 20:49:12.369702 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube847297876 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0920 20:49:12.372596 20180 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0920 20:49:12.372624 20180 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0920 20:49:12.372830 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411551087 /etc/kubernetes/addons/ig-clusterrole.yaml
I0920 20:49:12.388771 20180 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0920 20:49:12.388807 20180 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0920 20:49:12.388936 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3895675179 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0920 20:49:12.390019 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 20:49:12.414694 20180 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0920 20:49:12.414872 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3598296078 /etc/kubernetes/addons/storageclass.yaml
I0920 20:49:12.415033 20180 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0920 20:49:12.415059 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0920 20:49:12.415238 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2778301694 /etc/kubernetes/addons/yakd-dp.yaml
I0920 20:49:12.430877 20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0920 20:49:12.430922 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0920 20:49:12.431053 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1745240354 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0920 20:49:12.436658 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0920 20:49:12.454128 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0920 20:49:12.476866 20180 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0920 20:49:12.476910 20180 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0920 20:49:12.477056 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2050546734 /etc/kubernetes/addons/ig-crd.yaml
I0920 20:49:12.489718 20180 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0920 20:49:12.489752 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0920 20:49:12.489900 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393167977 /etc/kubernetes/addons/ig-daemonset.yaml
I0920 20:49:12.548639 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0920 20:49:12.551582 20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0920 20:49:12.551619 20180 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0920 20:49:12.551745 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1066337254 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0920 20:49:12.565087 20180 exec_runner.go:51] Run: sudo systemctl start kubelet
I0920 20:49:12.607605 20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0920 20:49:12.607649 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0920 20:49:12.607792 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3528681884 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0920 20:49:12.670144 20180 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
I0920 20:49:12.674496 20180 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
I0920 20:49:12.674520 20180 node_ready.go:38] duration metric: took 4.343732ms for node "ubuntu-20-agent-2" to be "Ready" ...
I0920 20:49:12.674530 20180 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 20:49:12.683673 20180 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace to be "Ready" ...
I0920 20:49:12.732429 20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0920 20:49:12.732461 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0920 20:49:12.732589 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1828190478 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0920 20:49:12.805333 20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 20:49:12.805373 20180 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0920 20:49:12.805517 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1273143737 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 20:49:12.819406 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0920 20:49:13.019338 20180 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0920 20:49:13.072063 20180 addons.go:475] Verifying addon registry=true in "minikube"
I0920 20:49:13.077935 20180 out.go:177] * Verifying registry addon...
I0920 20:49:13.080656 20180 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0920 20:49:13.084338 20180 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0920 20:49:13.084358 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:13.518923 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.064727035s)
I0920 20:49:13.521474 20180 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0920 20:49:13.531215 20180 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0920 20:49:13.557133 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.27815973s)
I0920 20:49:13.561720 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.237698784s)
I0920 20:49:13.561753 20180 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0920 20:49:13.594197 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:13.610964 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.062265934s)
I0920 20:49:13.712924 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.363330885s)
I0920 20:49:14.091404 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:14.136946 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.746871797s)
W0920 20:49:14.136986 20180 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 20:49:14.137012 20180 retry.go:31] will retry after 255.414229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0920 20:49:14.392716 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0920 20:49:14.584659 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:14.690327 20180 pod_ready.go:103] pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace has status "Ready":"False"
I0920 20:49:15.085077 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:15.209847 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.959700016s)
I0920 20:49:15.548450 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.728961066s)
I0920 20:49:15.548568 20180 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0920 20:49:15.550458 20180 out.go:177] * Verifying csi-hostpath-driver addon...
I0920 20:49:15.555275 20180 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 20:49:15.561596 20180 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 20:49:15.561624 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:15.599547 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:15.599945 20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.207182826s)
I0920 20:49:16.060948 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:16.085165 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:16.560715 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:16.584823 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:17.059666 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:17.084960 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:17.190905 20180 pod_ready.go:103] pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace has status "Ready":"False"
I0920 20:49:17.560077 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:17.584168 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:18.060813 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:18.084907 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:18.560581 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:18.584830 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:19.060497 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:19.084109 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:19.206270 20180 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0920 20:49:19.206442 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4007279464 /var/lib/minikube/google_application_credentials.json
I0920 20:49:19.207428 20180 pod_ready.go:103] pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace has status "Ready":"False"
I0920 20:49:19.216950 20180 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0920 20:49:19.217066 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3826963202 /var/lib/minikube/google_cloud_project
I0920 20:49:19.226535 20180 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0920 20:49:19.226581 20180 host.go:66] Checking if "minikube" exists ...
I0920 20:49:19.227122 20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0920 20:49:19.227140 20180 api_server.go:166] Checking apiserver status ...
I0920 20:49:19.227168 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:19.244046 20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
I0920 20:49:19.254043 20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
I0920 20:49:19.254095 20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
I0920 20:49:19.262672 20180 api_server.go:204] freezer state: "THAWED"
I0920 20:49:19.262699 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:19.267467 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:19.267524 20180 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0920 20:49:19.372651 20180 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0920 20:49:19.414633 20180 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0920 20:49:19.476975 20180 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0920 20:49:19.477030 20180 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0920 20:49:19.477181 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1103847264 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0920 20:49:19.488862 20180 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0920 20:49:19.488906 20180 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0920 20:49:19.489009 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube412409584 /etc/kubernetes/addons/gcp-auth-service.yaml
I0920 20:49:19.497585 20180 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 20:49:19.497613 20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0920 20:49:19.497751 20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2712862313 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 20:49:19.506030 20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0920 20:49:19.562032 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:19.584064 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:20.034225 20180 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0920 20:49:20.035717 20180 out.go:177] * Verifying gcp-auth addon...
I0920 20:49:20.037596 20180 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0920 20:49:20.039885 20180 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 20:49:20.141938 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:20.142511 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:20.190706 20180 pod_ready.go:93] pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace has status "Ready":"True"
I0920 20:49:20.190726 20180 pod_ready.go:82] duration metric: took 7.506965168s for pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace to be "Ready" ...
I0920 20:49:20.190735 20180 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qgklq" in "kube-system" namespace to be "Ready" ...
I0920 20:49:20.194284 20180 pod_ready.go:93] pod "coredns-7c65d6cfc9-qgklq" in "kube-system" namespace has status "Ready":"True"
I0920 20:49:20.194299 20180 pod_ready.go:82] duration metric: took 3.558748ms for pod "coredns-7c65d6cfc9-qgklq" in "kube-system" namespace to be "Ready" ...
I0920 20:49:20.194310 20180 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 20:49:20.197746 20180 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0920 20:49:20.197764 20180 pod_ready.go:82] duration metric: took 3.446689ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 20:49:20.197772 20180 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 20:49:20.562909 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:20.584261 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:21.059873 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:21.141452 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:21.202875 20180 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0920 20:49:21.202896 20180 pod_ready.go:82] duration metric: took 1.005117492s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 20:49:21.202905 20180 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 20:49:21.207001 20180 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0920 20:49:21.207024 20180 pod_ready.go:82] duration metric: took 4.111378ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 20:49:21.207033 20180 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-75wt4" in "kube-system" namespace to be "Ready" ...
I0920 20:49:21.387941 20180 pod_ready.go:93] pod "kube-proxy-75wt4" in "kube-system" namespace has status "Ready":"True"
I0920 20:49:21.387962 20180 pod_ready.go:82] duration metric: took 180.923463ms for pod "kube-proxy-75wt4" in "kube-system" namespace to be "Ready" ...
I0920 20:49:21.387972 20180 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 20:49:21.558946 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:21.583689 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:21.787571 20180 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0920 20:49:21.787607 20180 pod_ready.go:82] duration metric: took 399.628497ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0920 20:49:21.787618 20180 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9ml89" in "kube-system" namespace to be "Ready" ...
I0920 20:49:22.059764 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:22.084434 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:22.187919 20180 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9ml89" in "kube-system" namespace has status "Ready":"True"
I0920 20:49:22.187945 20180 pod_ready.go:82] duration metric: took 400.319835ms for pod "nvidia-device-plugin-daemonset-9ml89" in "kube-system" namespace to be "Ready" ...
I0920 20:49:22.187954 20180 pod_ready.go:39] duration metric: took 9.513412698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0920 20:49:22.187975 20180 api_server.go:52] waiting for apiserver process to appear ...
I0920 20:49:22.188089 20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0920 20:49:22.205820 20180 api_server.go:72] duration metric: took 10.100993289s to wait for apiserver process to appear ...
I0920 20:49:22.205844 20180 api_server.go:88] waiting for apiserver healthz status ...
I0920 20:49:22.205862 20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0920 20:49:22.210968 20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0920 20:49:22.211783 20180 api_server.go:141] control plane version: v1.31.1
I0920 20:49:22.211807 20180 api_server.go:131] duration metric: took 5.95658ms to wait for apiserver health ...
I0920 20:49:22.211814 20180 system_pods.go:43] waiting for kube-system pods to appear ...
I0920 20:49:22.392952 20180 system_pods.go:59] 16 kube-system pods found
I0920 20:49:22.392979 20180 system_pods.go:61] "coredns-7c65d6cfc9-57rnw" [b1133b0b-cc06-4311-9bb6-50af62e1e360] Running
I0920 20:49:22.392987 20180 system_pods.go:61] "csi-hostpath-attacher-0" [98bcd33a-b03a-418f-b92f-e7b81e582a80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 20:49:22.392994 20180 system_pods.go:61] "csi-hostpath-resizer-0" [636162ab-04cb-4555-98d8-a66270a4f1da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 20:49:22.393001 20180 system_pods.go:61] "csi-hostpathplugin-mk5k4" [2a54156b-f734-45c4-aa12-19769dd0e1a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 20:49:22.393005 20180 system_pods.go:61] "etcd-ubuntu-20-agent-2" [196b76f8-5e80-4f2d-b234-4683af81fe5f] Running
I0920 20:49:22.393012 20180 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [eda804e9-918a-4243-8b3f-4fff2ded7153] Running
I0920 20:49:22.393018 20180 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [72170d6b-2bba-49ce-a567-b95e08521cca] Running
I0920 20:49:22.393024 20180 system_pods.go:61] "kube-proxy-75wt4" [87279c83-98d0-4c21-8df6-af13deac9832] Running
I0920 20:49:22.393029 20180 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [5bd6f614-0faa-4b39-b5e3-719590323564] Running
I0920 20:49:22.393038 20180 system_pods.go:61] "metrics-server-84c5f94fbc-r8fg4" [c1ae637f-e27e-48fe-96fb-249357137ba1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 20:49:22.393048 20180 system_pods.go:61] "nvidia-device-plugin-daemonset-9ml89" [3c92ac5c-2c50-4c61-ab43-ddb84a8f39c1] Running
I0920 20:49:22.393055 20180 system_pods.go:61] "registry-66c9cd494c-h9zxc" [94a85633-fa9f-4487-8730-3b82acd43c17] Running
I0920 20:49:22.393062 20180 system_pods.go:61] "registry-proxy-lkpsj" [260577bf-b43b-4e23-97b2-02d10adfa092] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 20:49:22.393074 20180 system_pods.go:61] "snapshot-controller-56fcc65765-ddrnq" [21dc34c7-3e6b-401e-aa65-5066383310dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 20:49:22.393087 20180 system_pods.go:61] "snapshot-controller-56fcc65765-kgmwp" [f2a0abdd-2a42-402f-9fd2-47318fb4e02d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 20:49:22.393096 20180 system_pods.go:61] "storage-provisioner" [c68d692e-6601-405b-a2c3-f181a8053b18] Running
I0920 20:49:22.393108 20180 system_pods.go:74] duration metric: took 181.285588ms to wait for pod list to return data ...
I0920 20:49:22.393120 20180 default_sa.go:34] waiting for default service account to be created ...
I0920 20:49:22.559499 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:22.584542 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:22.587313 20180 default_sa.go:45] found service account: "default"
I0920 20:49:22.587341 20180 default_sa.go:55] duration metric: took 194.213859ms for default service account to be created ...
I0920 20:49:22.587351 20180 system_pods.go:116] waiting for k8s-apps to be running ...
I0920 20:49:22.794770 20180 system_pods.go:86] 16 kube-system pods found
I0920 20:49:22.794802 20180 system_pods.go:89] "coredns-7c65d6cfc9-57rnw" [b1133b0b-cc06-4311-9bb6-50af62e1e360] Running
I0920 20:49:22.794811 20180 system_pods.go:89] "csi-hostpath-attacher-0" [98bcd33a-b03a-418f-b92f-e7b81e582a80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0920 20:49:22.794819 20180 system_pods.go:89] "csi-hostpath-resizer-0" [636162ab-04cb-4555-98d8-a66270a4f1da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0920 20:49:22.794829 20180 system_pods.go:89] "csi-hostpathplugin-mk5k4" [2a54156b-f734-45c4-aa12-19769dd0e1a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0920 20:49:22.794838 20180 system_pods.go:89] "etcd-ubuntu-20-agent-2" [196b76f8-5e80-4f2d-b234-4683af81fe5f] Running
I0920 20:49:22.794844 20180 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [eda804e9-918a-4243-8b3f-4fff2ded7153] Running
I0920 20:49:22.794854 20180 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [72170d6b-2bba-49ce-a567-b95e08521cca] Running
I0920 20:49:22.794861 20180 system_pods.go:89] "kube-proxy-75wt4" [87279c83-98d0-4c21-8df6-af13deac9832] Running
I0920 20:49:22.794870 20180 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [5bd6f614-0faa-4b39-b5e3-719590323564] Running
I0920 20:49:22.794879 20180 system_pods.go:89] "metrics-server-84c5f94fbc-r8fg4" [c1ae637f-e27e-48fe-96fb-249357137ba1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0920 20:49:22.794887 20180 system_pods.go:89] "nvidia-device-plugin-daemonset-9ml89" [3c92ac5c-2c50-4c61-ab43-ddb84a8f39c1] Running
I0920 20:49:22.794893 20180 system_pods.go:89] "registry-66c9cd494c-h9zxc" [94a85633-fa9f-4487-8730-3b82acd43c17] Running
I0920 20:49:22.794903 20180 system_pods.go:89] "registry-proxy-lkpsj" [260577bf-b43b-4e23-97b2-02d10adfa092] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0920 20:49:22.794912 20180 system_pods.go:89] "snapshot-controller-56fcc65765-ddrnq" [21dc34c7-3e6b-401e-aa65-5066383310dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 20:49:22.794928 20180 system_pods.go:89] "snapshot-controller-56fcc65765-kgmwp" [f2a0abdd-2a42-402f-9fd2-47318fb4e02d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0920 20:49:22.794938 20180 system_pods.go:89] "storage-provisioner" [c68d692e-6601-405b-a2c3-f181a8053b18] Running
I0920 20:49:22.794947 20180 system_pods.go:126] duration metric: took 207.589673ms to wait for k8s-apps to be running ...
I0920 20:49:22.794960 20180 system_svc.go:44] waiting for kubelet service to be running ....
I0920 20:49:22.795013 20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0920 20:49:22.811727 20180 system_svc.go:56] duration metric: took 16.7576ms WaitForService to wait for kubelet
I0920 20:49:22.811753 20180 kubeadm.go:582] duration metric: took 10.706935564s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0920 20:49:22.811777 20180 node_conditions.go:102] verifying NodePressure condition ...
I0920 20:49:22.988476 20180 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0920 20:49:22.988504 20180 node_conditions.go:123] node cpu capacity is 8
I0920 20:49:22.988518 20180 node_conditions.go:105] duration metric: took 176.735395ms to run NodePressure ...
I0920 20:49:22.988532 20180 start.go:241] waiting for startup goroutines ...
I0920 20:49:23.142176 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:23.142716 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:23.559072 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:23.583870 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:24.059407 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:24.084140 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:24.559979 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:24.585021 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:25.059282 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:25.084054 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:25.559686 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:25.584865 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:26.059722 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:26.084214 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:26.559876 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:26.583788 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:27.141240 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:27.142011 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:27.558537 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:27.584205 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0920 20:49:28.059625 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:28.084440 20180 kapi.go:107] duration metric: took 15.003786275s to wait for kubernetes.io/minikube-addons=registry ...
I0920 20:49:28.559950 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:29.059527 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:29.559576 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:30.059545 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:30.559554 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:31.059522 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:31.559251 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:32.060076 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:32.560020 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:33.059567 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:33.560146 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:34.060619 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:34.559652 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:35.143121 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:35.560456 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:36.060470 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:36.642344 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:37.059474 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:37.560679 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:38.060023 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:38.560089 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:39.059921 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:39.560644 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:40.059638 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:40.559556 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:41.059296 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:41.558907 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:42.059996 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:42.560097 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:43.059593 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:43.559470 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:44.059934 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:44.559450 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:45.060371 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:45.559584 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:46.060294 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:46.559498 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:47.059608 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:47.558665 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:48.059700 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:48.559600 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:49.060332 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:49.559405 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:50.059518 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0920 20:49:50.559382 20180 kapi.go:107] duration metric: took 35.004106999s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0920 20:50:01.540874 20180 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0920 20:50:01.540897 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:02.040748 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:02.540823 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:03.040325 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:03.540759 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:04.040609 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:04.540106 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:05.040623 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:05.540641 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:06.040897 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:06.541029 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:07.040939 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:07.540655 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:08.040731 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:08.540507 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:09.040493 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:09.540219 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:10.041262 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:10.541109 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:11.040996 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:11.541086 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:12.041417 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:12.540882 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:13.040422 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:13.540780 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:14.040500 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:14.540390 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:15.040461 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:15.540641 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:16.040207 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:16.541485 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:17.040612 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:17.540798 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:18.040321 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:18.541874 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:19.040682 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:19.540638 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:20.040435 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:20.541453 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:21.041455 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:21.540702 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:22.040941 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:22.541300 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:23.041168 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:23.541850 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:24.041012 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:24.541224 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:25.040713 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:25.540492 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:26.040396 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:26.541688 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:27.040702 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:27.540456 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:28.041459 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:28.541275 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:29.041115 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:29.541509 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:30.040443 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:30.540322 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:31.041473 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:31.541749 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:32.040634 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:32.540694 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:33.040331 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:33.541745 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:34.040242 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:34.540854 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:35.041111 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:35.541616 20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0920 20:50:36.040825 20180 kapi.go:107] duration metric: took 1m16.003230373s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0920 20:50:36.042199 20180 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0920 20:50:36.043734 20180 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0920 20:50:36.045036 20180 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0920 20:50:36.046454 20180 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, yakd, storage-provisioner, metrics-server, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0920 20:50:36.047753 20180 addons.go:510] duration metric: took 1m23.948838661s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner yakd storage-provisioner metrics-server inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0920 20:50:36.047793 20180 start.go:246] waiting for cluster config update ...
I0920 20:50:36.047812 20180 start.go:255] writing updated cluster config ...
I0920 20:50:36.048058 20180 exec_runner.go:51] Run: rm -f paused
I0920 20:50:36.092790 20180 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0920 20:50:36.094745 20180 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Sun 2024-08-11 20:18:06 UTC, end at Fri 2024-09-20 21:00:29 UTC. --
Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.962558882Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.962561057Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.962604927Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.965715963Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.966637940Z" level=error msg="Error running exec 6e113b963b231d7177e751d503342850c5e02ae2f42429ba0af4a6e155022557 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=e95e76e34ea70200 traceID=9402058185d7e293cfd6ef2ba267e970
Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.967401011Z" level=error msg="Error running exec fada8c174c17ace1cd65a5baf88ebbdef8d1c882eb02eab9b011cace3a00e0b3 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=527159cc85c876eb traceID=9d1a497f017f2b008ea31e5608e80e16
Sep 20 20:52:54 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:54.088003253Z" level=info msg="ignoring event" container=fd90c5fca3ff49e78f373fb6cccc4060cccd9afab8a3bd4285e7f722037e889d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 20:54:10 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:54:10.523337119Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=d9fe50539ac6f93b traceID=f768be1f70494f63a79136bfd6868a6d
Sep 20 20:54:10 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:54:10.525602317Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=d9fe50539ac6f93b traceID=f768be1f70494f63a79136bfd6868a6d
Sep 20 20:55:43 ubuntu-20-agent-2 cri-dockerd[20726]: time="2024-09-20T20:55:43Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 20 20:55:45 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:55:45.025080059Z" level=info msg="ignoring event" container=c9953521c9f2f80ce46f7971e85bc1a98d4eb3d6048c565b569c5e8d1e1b8798 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 20:56:57 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:56:57.514686543Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5003dc7d3b6ce8a2 traceID=966b363656487a737fd4a8841e1e1915
Sep 20 20:56:57 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:56:57.516779578Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5003dc7d3b6ce8a2 traceID=966b363656487a737fd4a8841e1e1915
Sep 20 20:59:28 ubuntu-20-agent-2 cri-dockerd[20726]: time="2024-09-20T20:59:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b0173f138bc1cbd539d8594e12112c29ea1f83fe0f7638f1d996543fe1cb6223/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 20 20:59:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:59:28.813227584Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=97651ed730d07bac traceID=baf66a42bdb0bae3a9f03742c7abca9e
Sep 20 20:59:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:59:28.815384051Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=97651ed730d07bac traceID=baf66a42bdb0bae3a9f03742c7abca9e
Sep 20 20:59:41 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:59:41.512570585Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=13bb970a5ed7d6cf traceID=8ce131ae86e7b37817f8fa0c5630040d
Sep 20 20:59:41 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:59:41.514642562Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=13bb970a5ed7d6cf traceID=8ce131ae86e7b37817f8fa0c5630040d
Sep 20 21:00:07 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:07.526057946Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=5580e70f5399aa26 traceID=51dea950d9bdf9d58e8e02bc2e9d9896
Sep 20 21:00:07 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:07.527996265Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=5580e70f5399aa26 traceID=51dea950d9bdf9d58e8e02bc2e9d9896
Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.280380779Z" level=info msg="ignoring event" container=b0173f138bc1cbd539d8594e12112c29ea1f83fe0f7638f1d996543fe1cb6223 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.534500118Z" level=info msg="ignoring event" container=0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.595581808Z" level=info msg="ignoring event" container=d1396593cca733b6117d9ab7c080b88d501b9bd6f43afc8c16f73e10c030a92f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.671792408Z" level=info msg="ignoring event" container=af9dabc4e6d9a9c8461ee74356ed9ac51541a5ed6cc15d402552e32183280c48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.762599246Z" level=info msg="ignoring event" container=9d38c4f2f625c2dd96754658624817af16824e476c060af8268bb91d095d16e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
c9953521c9f2f ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 4 minutes ago Exited gadget 6 0f5e1a1a31de5 gadget-lx8nd
b354a4ea6d705 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 cd4ad8b11b987 gcp-auth-89d5ffd79-6krl6
ebc759a9198ae registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 a83271d199de3 csi-hostpathplugin-mk5k4
635f7e156c847 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 a83271d199de3 csi-hostpathplugin-mk5k4
85a852bb542fd registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 a83271d199de3 csi-hostpathplugin-mk5k4
440b83db0b0d2 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 a83271d199de3 csi-hostpathplugin-mk5k4
e89cbc4018197 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 a83271d199de3 csi-hostpathplugin-mk5k4
8d00a07f2e450 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 a83271d199de3 csi-hostpathplugin-mk5k4
80b6797aac0a4 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 b93b6f67cb3c7 csi-hostpath-resizer-0
086d99ee270a7 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 4b2bdebbbd96b csi-hostpath-attacher-0
6ca19f9fa10ec registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 0403f9d19821a snapshot-controller-56fcc65765-ddrnq
b89b630027d5b registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 b213ac584f018 snapshot-controller-56fcc65765-kgmwp
748c7cb78fa6d rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 10 minutes ago Running local-path-provisioner 0 d8a9eaaea6174 local-path-provisioner-86d989889c-dw98n
52f90e6f068aa marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 ccf9744afe5db yakd-dashboard-67d98fc6b-q69lb
39813f2eae876 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 10 minutes ago Running metrics-server 0 2305dc05cbc0e metrics-server-84c5f94fbc-r8fg4
a1749f4e50828 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 11 minutes ago Running cloud-spanner-emulator 0 dff843451ce46 cloud-spanner-emulator-769b77f747-ndkcz
708e6656f04e2 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 8498f9d26f31e nvidia-device-plugin-daemonset-9ml89
de5fb063acdb4 6e38f40d628db 11 minutes ago Running storage-provisioner 0 818dd6b99d02c storage-provisioner
f12058db504ad c69fa2e9cbf5f 11 minutes ago Running coredns 0 7928101a55ffb coredns-7c65d6cfc9-57rnw
6f9a1d6bb9e44 60c005f310ff3 11 minutes ago Running kube-proxy 0 75fdba180bb92 kube-proxy-75wt4
cbd580b353bf3 9aa1fad941575 11 minutes ago Running kube-scheduler 0 20ae3a9930237 kube-scheduler-ubuntu-20-agent-2
b48b9c8f5139d 2e96e5913fc06 11 minutes ago Running etcd 0 ce4bb8805e454 etcd-ubuntu-20-agent-2
47bfaead87737 6bab7719df100 11 minutes ago Running kube-apiserver 0 bf5d95c8dd763 kube-apiserver-ubuntu-20-agent-2
221d32dd9bb7b 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 c5fda72bf74f5 kube-controller-manager-ubuntu-20-agent-2
==> coredns [f12058db504a] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:40160 - 41255 "HINFO IN 7543578608229357731.3542518416492414400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021270791s
[INFO] 10.244.0.23:60994 - 17503 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314374s
[INFO] 10.244.0.23:33872 - 20952 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189485s
[INFO] 10.244.0.23:42065 - 7217 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164885s
[INFO] 10.244.0.23:53936 - 5860 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120965s
[INFO] 10.244.0.23:57923 - 55414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001517s
[INFO] 10.244.0.23:59125 - 36506 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158984s
[INFO] 10.244.0.23:57950 - 19416 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.002021348s
[INFO] 10.244.0.23:40576 - 14776 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004461841s
[INFO] 10.244.0.23:36596 - 63525 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003348995s
[INFO] 10.244.0.23:34053 - 9158 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004554138s
[INFO] 10.244.0.23:33644 - 41169 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002420875s
[INFO] 10.244.0.23:54774 - 17567 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004724108s
[INFO] 10.244.0.23:47025 - 21802 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001417634s
[INFO] 10.244.0.23:44966 - 6973 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001430901s
==> describe nodes <==
Name: ubuntu-20-agent-2
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-2
kubernetes.io/os=linux
minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_20T20_49_07_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-2
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 20 Sep 2024 20:49:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-2
AcquireTime: <unset>
RenewTime: Fri, 20 Sep 2024 21:00:21 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 20 Sep 2024 20:56:16 +0000 Fri, 20 Sep 2024 20:49:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 20 Sep 2024 20:56:16 +0000 Fri, 20 Sep 2024 20:49:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 20 Sep 2024 20:56:16 +0000 Fri, 20 Sep 2024 20:49:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 20 Sep 2024 20:56:16 +0000 Fri, 20 Sep 2024 20:49:05 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.138.0.48
Hostname: ubuntu-20-agent-2
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859312Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859312Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 1ec29a5c-5f40-e854-ccac-68a60c2524db
Boot ID: a3d12c8f-1aea-485c-8ba4-0a0207c8ac9f
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m14s
default cloud-spanner-emulator-769b77f747-ndkcz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-lx8nd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-6krl6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-57rnw 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-mk5k4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-2 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-2 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-2 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-75wt4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-2 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-r8fg4 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-9ml89 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-ddrnq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-kgmwp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-dw98n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-q69lb 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m (x8 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m (x7 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m (x7 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
==> dmesg <==
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 55 32 a5 08 51 08 06
[ +0.022778] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff fa b2 ab 4a 4b e8 08 06
[ +2.648895] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 b9 67 90 ab 9e 08 06
[ +1.679092] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 82 24 d1 8f 70 08 06
[ +2.156338] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 45 0d 8e 8a 9e 08 06
[ +4.496594] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000017] ll header: 00000000: ff ff ff ff ff ff d6 d5 2b eb 38 91 08 06
[ +0.036420] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 e1 43 ce 7b e1 08 06
[ +0.052441] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff da 7d 01 b9 ba c7 08 06
[ +0.954869] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 80 69 78 34 ff 08 06
[Sep20 20:50] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 ce a3 6a 0f 8f 08 06
[ +0.016295] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 2f 2a 61 d1 af 08 06
[ +11.103546] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 25 a7 0b af 85 08 06
[ +0.000485] IPv4: martian source 10.244.0.23 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 46 19 e2 39 77 08 06
==> etcd [b48b9c8f5139] <==
{"level":"info","ts":"2024-09-20T20:49:03.808957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
{"level":"info","ts":"2024-09-20T20:49:03.808969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
{"level":"info","ts":"2024-09-20T20:49:03.808974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-20T20:49:03.808983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
{"level":"info","ts":"2024-09-20T20:49:03.808990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-20T20:49:03.809841Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T20:49:03.810410Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-20T20:49:03.810412Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-20T20:49:03.810432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-20T20:49:03.810656Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-20T20:49:03.810679Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-20T20:49:03.810703Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T20:49:03.810762Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T20:49:03.810788Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-20T20:49:03.811439Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-20T20:49:03.811529Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-20T20:49:03.812234Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
{"level":"info","ts":"2024-09-20T20:49:03.812288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-20T20:49:19.946178Z","caller":"traceutil/trace.go:171","msg":"trace[2050446239] transaction","detail":"{read_only:false; response_revision:858; number_of_response:1; }","duration":"101.190726ms","start":"2024-09-20T20:49:19.844972Z","end":"2024-09-20T20:49:19.946163Z","steps":["trace[2050446239] 'process raft request' (duration: 101.12898ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T20:49:19.946196Z","caller":"traceutil/trace.go:171","msg":"trace[1027106398] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"101.224332ms","start":"2024-09-20T20:49:19.844953Z","end":"2024-09-20T20:49:19.946177Z","steps":["trace[1027106398] 'process raft request' (duration: 99.459793ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-20T20:49:19.946446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.281807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
{"level":"info","ts":"2024-09-20T20:49:19.946521Z","caller":"traceutil/trace.go:171","msg":"trace[816460777] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:858; }","duration":"100.374802ms","start":"2024-09-20T20:49:19.846136Z","end":"2024-09-20T20:49:19.946511Z","steps":["trace[816460777] 'agreement among raft nodes before linearized reading' (duration: 100.096101ms)"],"step_count":1}
{"level":"info","ts":"2024-09-20T20:59:03.828462Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1722}
{"level":"info","ts":"2024-09-20T20:59:03.851434Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1722,"took":"22.501092ms","hash":2222568896,"current-db-size-bytes":8273920,"current-db-size":"8.3 MB","current-db-size-in-use-bytes":4395008,"current-db-size-in-use":"4.4 MB"}
{"level":"info","ts":"2024-09-20T20:59:03.851474Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2222568896,"revision":1722,"compact-revision":-1}
==> gcp-auth [b354a4ea6d70] <==
2024/09/20 20:50:35 GCP Auth Webhook started!
2024/09/20 20:50:52 Ready to marshal response ...
2024/09/20 20:50:52 Ready to write response ...
2024/09/20 20:50:52 Ready to marshal response ...
2024/09/20 20:50:52 Ready to write response ...
2024/09/20 20:51:15 Ready to marshal response ...
2024/09/20 20:51:15 Ready to write response ...
2024/09/20 20:51:15 Ready to marshal response ...
2024/09/20 20:51:15 Ready to write response ...
2024/09/20 20:51:15 Ready to marshal response ...
2024/09/20 20:51:15 Ready to write response ...
2024/09/20 20:59:28 Ready to marshal response ...
2024/09/20 20:59:28 Ready to write response ...
==> kernel <==
21:00:29 up 42 min, 0 users, load average: 0.17, 0.25, 0.25
Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [47bfaead8773] <==
W0920 20:49:54.476588 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.66.30:443: connect: connection refused
W0920 20:50:01.035497 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.12.193:443: connect: connection refused
E0920 20:50:01.035532 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.12.193:443: connect: connection refused" logger="UnhandledError"
W0920 20:50:23.048284 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.12.193:443: connect: connection refused
E0920 20:50:23.048321 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.12.193:443: connect: connection refused" logger="UnhandledError"
W0920 20:50:23.071754 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.12.193:443: connect: connection refused
E0920 20:50:23.071853 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.12.193:443: connect: connection refused" logger="UnhandledError"
I0920 20:50:52.380693 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0920 20:50:52.400608 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0920 20:51:05.767919 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0920 20:51:05.776941 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0920 20:51:05.897090 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0920 20:51:05.897389 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0920 20:51:05.902414 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0920 20:51:05.938660 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0920 20:51:06.059330 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0920 20:51:06.066890 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0920 20:51:06.087978 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0920 20:51:06.910036 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0920 20:51:06.930027 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0920 20:51:06.938910 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0920 20:51:07.088659 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0920 20:51:07.167730 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0920 20:51:07.168338 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0920 20:51:07.285727 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [221d32dd9bb7] <==
W0920 20:59:03.105947 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 20:59:03.105994 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 20:59:29.251858 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 20:59:29.251910 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 20:59:35.347173 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 20:59:35.347218 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 20:59:41.648924 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 20:59:41.648964 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 20:59:41.938232 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 20:59:41.938274 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 20:59:45.940463 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 20:59:45.940503 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 20:59:47.344577 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 20:59:47.344634 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 21:00:02.599536 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 21:00:02.599577 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 21:00:12.801729 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 21:00:12.801773 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 21:00:13.514435 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 21:00:13.514473 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 21:00:14.561226 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 21:00:14.561267 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0920 21:00:22.071244 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0920 21:00:22.071289 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0920 21:00:28.498540 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.06µs"
==> kube-proxy [6f9a1d6bb9e4] <==
I0920 20:49:13.211922 1 server_linux.go:66] "Using iptables proxy"
I0920 20:49:13.443426 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
E0920 20:49:13.443503 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0920 20:49:13.521691 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0920 20:49:13.521751 1 server_linux.go:169] "Using iptables Proxier"
I0920 20:49:13.528168 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0920 20:49:13.528641 1 server.go:483] "Version info" version="v1.31.1"
I0920 20:49:13.528666 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0920 20:49:13.531946 1 config.go:199] "Starting service config controller"
I0920 20:49:13.531990 1 shared_informer.go:313] Waiting for caches to sync for service config
I0920 20:49:13.532031 1 config.go:328] "Starting node config controller"
I0920 20:49:13.532037 1 shared_informer.go:313] Waiting for caches to sync for node config
I0920 20:49:13.532253 1 config.go:105] "Starting endpoint slice config controller"
I0920 20:49:13.532266 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0920 20:49:13.633762 1 shared_informer.go:320] Caches are synced for node config
I0920 20:49:13.633817 1 shared_informer.go:320] Caches are synced for service config
I0920 20:49:13.633880 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [cbd580b353bf] <==
W0920 20:49:04.684211 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0920 20:49:04.684228 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 20:49:04.684260 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0920 20:49:04.684295 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 20:49:04.684359 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0920 20:49:04.684395 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.506591 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0920 20:49:05.506634 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.538157 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0920 20:49:05.538203 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.569903 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0920 20:49:05.569944 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.583664 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0920 20:49:05.583701 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.640404 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0920 20:49:05.640450 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.644866 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0920 20:49:05.644912 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.676328 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0920 20:49:05.676366 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.685814 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0920 20:49:05.685856 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0920 20:49:05.947602 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0920 20:49:05.947648 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0920 20:49:08.981339 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Sun 2024-08-11 20:18:06 UTC, end at Fri 2024-09-20 21:00:29 UTC. --
Sep 20 21:00:13 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:13.372381 21629 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfb5121c-69b0-4596-89e8-15c4b5558d53"
Sep 20 21:00:18 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:18.372873 21629 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="17c09791-25f7-43f8-a4f1-1fdd0ce296b2"
Sep 20 21:00:19 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:19.370982 21629 scope.go:117] "RemoveContainer" containerID="c9953521c9f2f80ce46f7971e85bc1a98d4eb3d6048c565b569c5e8d1e1b8798"
Sep 20 21:00:19 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:19.371183 21629 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-lx8nd_gadget(b0b1fb0a-be7e-4e5a-80cd-fe281bc1a0b0)\"" pod="gadget/gadget-lx8nd" podUID="b0b1fb0a-be7e-4e5a-80cd-fe281bc1a0b0"
Sep 20 21:00:25 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:25.373293 21629 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfb5121c-69b0-4596-89e8-15c4b5558d53"
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.471808 21629 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-gcp-creds\") pod \"17c09791-25f7-43f8-a4f1-1fdd0ce296b2\" (UID: \"17c09791-25f7-43f8-a4f1-1fdd0ce296b2\") "
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.471873 21629 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvds5\" (UniqueName: \"kubernetes.io/projected/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-kube-api-access-rvds5\") pod \"17c09791-25f7-43f8-a4f1-1fdd0ce296b2\" (UID: \"17c09791-25f7-43f8-a4f1-1fdd0ce296b2\") "
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.471950 21629 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "17c09791-25f7-43f8-a4f1-1fdd0ce296b2" (UID: "17c09791-25f7-43f8-a4f1-1fdd0ce296b2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.473980 21629 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-kube-api-access-rvds5" (OuterVolumeSpecName: "kube-api-access-rvds5") pod "17c09791-25f7-43f8-a4f1-1fdd0ce296b2" (UID: "17c09791-25f7-43f8-a4f1-1fdd0ce296b2"). InnerVolumeSpecName "kube-api-access-rvds5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.572726 21629 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.572759 21629 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rvds5\" (UniqueName: \"kubernetes.io/projected/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-kube-api-access-rvds5\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.816524 21629 scope.go:117] "RemoveContainer" containerID="d1396593cca733b6117d9ab7c080b88d501b9bd6f43afc8c16f73e10c030a92f"
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.833122 21629 scope.go:117] "RemoveContainer" containerID="0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.851320 21629 scope.go:117] "RemoveContainer" containerID="0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:28.852126 21629 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8" containerID="0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.852159 21629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"} err="failed to get container status \"0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.874458 21629 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4bth\" (UniqueName: \"kubernetes.io/projected/94a85633-fa9f-4487-8730-3b82acd43c17-kube-api-access-z4bth\") pod \"94a85633-fa9f-4487-8730-3b82acd43c17\" (UID: \"94a85633-fa9f-4487-8730-3b82acd43c17\") "
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.876154 21629 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a85633-fa9f-4487-8730-3b82acd43c17-kube-api-access-z4bth" (OuterVolumeSpecName: "kube-api-access-z4bth") pod "94a85633-fa9f-4487-8730-3b82acd43c17" (UID: "94a85633-fa9f-4487-8730-3b82acd43c17"). InnerVolumeSpecName "kube-api-access-z4bth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.975544 21629 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zgnl\" (UniqueName: \"kubernetes.io/projected/260577bf-b43b-4e23-97b2-02d10adfa092-kube-api-access-2zgnl\") pod \"260577bf-b43b-4e23-97b2-02d10adfa092\" (UID: \"260577bf-b43b-4e23-97b2-02d10adfa092\") "
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.975730 21629 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z4bth\" (UniqueName: \"kubernetes.io/projected/94a85633-fa9f-4487-8730-3b82acd43c17-kube-api-access-z4bth\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.977413 21629 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260577bf-b43b-4e23-97b2-02d10adfa092-kube-api-access-2zgnl" (OuterVolumeSpecName: "kube-api-access-2zgnl") pod "260577bf-b43b-4e23-97b2-02d10adfa092" (UID: "260577bf-b43b-4e23-97b2-02d10adfa092"). InnerVolumeSpecName "kube-api-access-2zgnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 20 21:00:29 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:29.076656 21629 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2zgnl\" (UniqueName: \"kubernetes.io/projected/260577bf-b43b-4e23-97b2-02d10adfa092-kube-api-access-2zgnl\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 20 21:00:29 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:29.382084 21629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c09791-25f7-43f8-a4f1-1fdd0ce296b2" path="/var/lib/kubelet/pods/17c09791-25f7-43f8-a4f1-1fdd0ce296b2/volumes"
Sep 20 21:00:29 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:29.382313 21629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="260577bf-b43b-4e23-97b2-02d10adfa092" path="/var/lib/kubelet/pods/260577bf-b43b-4e23-97b2-02d10adfa092/volumes"
Sep 20 21:00:29 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:29.382627 21629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a85633-fa9f-4487-8730-3b82acd43c17" path="/var/lib/kubelet/pods/94a85633-fa9f-4487-8730-3b82acd43c17/volumes"
==> storage-provisioner [de5fb063acdb] <==
I0920 20:49:14.714203 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0920 20:49:14.736172 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0920 20:49:14.736224 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0920 20:49:14.753727 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0920 20:49:14.755157 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_f8672bce-c4cb-421a-8a72-fd4d339910ad!
I0920 20:49:14.757300 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbeb251e-08da-4635-9e98-8038c240d12c", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_f8672bce-c4cb-421a-8a72-fd4d339910ad became leader
I0920 20:49:14.856613 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_f8672bce-c4cb-421a-8a72-fd4d339910ad!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: ubuntu-20-agent-2/10.138.0.48
Start Time: Fri, 20 Sep 2024 20:51:15 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.25
IPs:
IP: 10.244.0.25
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kggpp (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-kggpp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m14s default-scheduler Successfully assigned default/busybox to ubuntu-20-agent-2
Normal Pulling 7m43s (x4 over 9m13s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m43s (x4 over 9m13s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning Failed 7m43s (x4 over 9m13s) kubelet Error: ErrImagePull
Warning Failed 7m30s (x6 over 9m13s) kubelet Error: ImagePullBackOff
Normal BackOff 3m58s (x20 over 9m13s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.85s)