=== RUN TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.75892ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-hjbk7" [a9c82301-9560-4dd9-a31e-55bc04efd0e3] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00369895s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wxxhj" [532ef9f6-818b-4628-a77d-5cb0d7ae89b4] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004493227s
addons_test.go:338: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.083023569s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/25 18:41:50 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:39913 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:29 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 25 Sep 24 18:29 UTC | 25 Sep 24 18:30 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 25 Sep 24 18:30 UTC | 25 Sep 24 18:30 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 25 Sep 24 18:30 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 25 Sep 24 18:30 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 25 Sep 24 18:30 UTC | 25 Sep 24 18:31 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 25 Sep 24 18:32 UTC | 25 Sep 24 18:32 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 25 Sep 24 18:41 UTC | 25 Sep 24 18:41 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 25 Sep 24 18:41 UTC | 25 Sep 24 18:41 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/25 18:30:15
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0925 18:30:15.413570 16451 out.go:345] Setting OutFile to fd 1 ...
I0925 18:30:15.413674 16451 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:30:15.413684 16451 out.go:358] Setting ErrFile to fd 2...
I0925 18:30:15.413689 16451 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0925 18:30:15.413871 16451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19681-5898/.minikube/bin
I0925 18:30:15.414536 16451 out.go:352] Setting JSON to false
I0925 18:30:15.415378 16451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":757,"bootTime":1727288258,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0925 18:30:15.415466 16451 start.go:139] virtualization: kvm guest
I0925 18:30:15.417954 16451 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0925 18:30:15.419367 16451 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19681-5898/.minikube/cache/preloaded-tarball: no such file or directory
I0925 18:30:15.419393 16451 notify.go:220] Checking for updates...
I0925 18:30:15.419420 16451 out.go:177] - MINIKUBE_LOCATION=19681
I0925 18:30:15.420848 16451 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0925 18:30:15.422175 16451 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19681-5898/kubeconfig
I0925 18:30:15.423560 16451 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19681-5898/.minikube
I0925 18:30:15.424974 16451 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0925 18:30:15.426266 16451 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0925 18:30:15.427980 16451 driver.go:394] Setting default libvirt URI to qemu:///system
I0925 18:30:15.439198 16451 out.go:177] * Using the none driver based on user configuration
I0925 18:30:15.440425 16451 start.go:297] selected driver: none
I0925 18:30:15.440436 16451 start.go:901] validating driver "none" against <nil>
I0925 18:30:15.440447 16451 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0925 18:30:15.440503 16451 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0925 18:30:15.440824 16451 out.go:270] ! The 'none' driver does not respect the --memory flag
I0925 18:30:15.441379 16451 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0925 18:30:15.441615 16451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0925 18:30:15.441673 16451 cni.go:84] Creating CNI manager for ""
I0925 18:30:15.441717 16451 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0925 18:30:15.441728 16451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0925 18:30:15.441782 16451 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0925 18:30:15.443352 16451 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0925 18:30:15.444923 16451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/config.json ...
I0925 18:30:15.444954 16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/config.json: {Name:mkebbdb915de43cc1f93c7d4941d4e2f5bdebd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:15.445067 16451 start.go:360] acquireMachinesLock for minikube: {Name:mk4b40feaa7a9ad6bd04907d48c7a40c739bd823 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0925 18:30:15.445098 16451 start.go:364] duration metric: took 16.826µs to acquireMachinesLock for "minikube"
I0925 18:30:15.445111   16451 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0925 18:30:15.445170 16451 start.go:125] createHost starting for "" (driver="none")
I0925 18:30:15.447357 16451 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0925 18:30:15.448638 16451 exec_runner.go:51] Run: systemctl --version
I0925 18:30:15.451316 16451 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0925 18:30:15.451351 16451 client.go:168] LocalClient.Create starting
I0925 18:30:15.451401 16451 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19681-5898/.minikube/certs/ca.pem
I0925 18:30:15.451428 16451 main.go:141] libmachine: Decoding PEM data...
I0925 18:30:15.451441 16451 main.go:141] libmachine: Parsing certificate...
I0925 18:30:15.451493 16451 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19681-5898/.minikube/certs/cert.pem
I0925 18:30:15.451513 16451 main.go:141] libmachine: Decoding PEM data...
I0925 18:30:15.451521 16451 main.go:141] libmachine: Parsing certificate...
I0925 18:30:15.451810 16451 client.go:171] duration metric: took 452.449µs to LocalClient.Create
I0925 18:30:15.451831 16451 start.go:167] duration metric: took 517.6µs to libmachine.API.Create "minikube"
I0925 18:30:15.451837 16451 start.go:293] postStartSetup for "minikube" (driver="none")
I0925 18:30:15.451884 16451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0925 18:30:15.451913 16451 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0925 18:30:15.461920 16451 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0925 18:30:15.461973 16451 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0925 18:30:15.461997 16451 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0925 18:30:15.464163 16451 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0925 18:30:15.466102 16451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19681-5898/.minikube/addons for local assets ...
I0925 18:30:15.466161 16451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19681-5898/.minikube/files for local assets ...
I0925 18:30:15.466182 16451 start.go:296] duration metric: took 14.340253ms for postStartSetup
I0925 18:30:15.466740 16451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/config.json ...
I0925 18:30:15.466866 16451 start.go:128] duration metric: took 21.688387ms to createHost
I0925 18:30:15.466883 16451 start.go:83] releasing machines lock for "minikube", held for 21.776148ms
I0925 18:30:15.467176 16451 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0925 18:30:15.467315 16451 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0925 18:30:15.469203 16451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0925 18:30:15.469244 16451 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0925 18:30:15.478926 16451 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0925 18:30:15.478957 16451 start.go:495] detecting cgroup driver to use...
I0925 18:30:15.478987 16451 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0925 18:30:15.479075 16451 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0925 18:30:15.499748 16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0925 18:30:15.509821 16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0925 18:30:15.518843 16451 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0925 18:30:15.518908 16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0925 18:30:15.529118 16451 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0925 18:30:15.538348 16451 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0925 18:30:15.548024 16451 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0925 18:30:15.557720 16451 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0925 18:30:15.566261 16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0925 18:30:15.575282 16451 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0925 18:30:15.584608 16451 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0925 18:30:15.594289 16451 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0925 18:30:15.602640 16451 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0925 18:30:15.611672 16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0925 18:30:15.828656 16451 exec_runner.go:51] Run: sudo systemctl restart containerd
I0925 18:30:15.894820 16451 start.go:495] detecting cgroup driver to use...
I0925 18:30:15.894869 16451 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0925 18:30:15.894986 16451 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0925 18:30:15.920077 16451 exec_runner.go:51] Run: which cri-dockerd
I0925 18:30:15.921108 16451 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0925 18:30:15.929166 16451 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0925 18:30:15.929187 16451 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0925 18:30:15.929229 16451 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0925 18:30:15.936584 16451 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0925 18:30:15.936737 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2070867925 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0925 18:30:15.944632 16451 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0925 18:30:16.156748 16451 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0925 18:30:16.380931 16451 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0925 18:30:16.381067 16451 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0925 18:30:16.381084 16451 exec_runner.go:203] rm: /etc/docker/daemon.json
I0925 18:30:16.381130 16451 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0925 18:30:16.390141 16451 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0925 18:30:16.390278 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4080595848 /etc/docker/daemon.json
I0925 18:30:16.398604 16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0925 18:30:16.602677 16451 exec_runner.go:51] Run: sudo systemctl restart docker
I0925 18:30:16.918425 16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0925 18:30:16.929664 16451 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0925 18:30:16.945369 16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0925 18:30:16.955930 16451 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0925 18:30:17.174981 16451 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0925 18:30:17.379196 16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0925 18:30:17.588355 16451 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0925 18:30:17.602383 16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0925 18:30:17.612956 16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0925 18:30:17.817627 16451 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0925 18:30:17.884897 16451 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0925 18:30:17.884963 16451 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0925 18:30:17.886290 16451 start.go:563] Will wait 60s for crictl version
I0925 18:30:17.886332 16451 exec_runner.go:51] Run: which crictl
I0925 18:30:17.887320 16451 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0925 18:30:17.915372 16451 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0925 18:30:17.915435 16451 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0925 18:30:17.935882 16451 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0925 18:30:17.960939 16451 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0925 18:30:17.961010 16451 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0925 18:30:17.963980 16451 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0925 18:30:17.965377   16451 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0925 18:30:17.965489 16451 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0925 18:30:17.965502 16451 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
I0925 18:30:17.965576 16451 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0925 18:30:17.965615 16451 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0925 18:30:18.013942 16451 cni.go:84] Creating CNI manager for ""
I0925 18:30:18.013972 16451 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0925 18:30:18.013986 16451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0925 18:30:18.014014   16451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0925 18:30:18.014166 16451 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.138.0.48
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-2"
kubeletExtraArgs:
node-ip: 10.138.0.48
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0925 18:30:18.014237 16451 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0925 18:30:18.022441 16451 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0925 18:30:18.022501 16451 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0925 18:30:18.031187 16451 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0925 18:30:18.031197 16451 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0925 18:30:18.031235 16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0925 18:30:18.031254 16451 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0925 18:30:18.031264 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0925 18:30:18.031295 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0925 18:30:18.043139 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0925 18:30:18.086651 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3351688123 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0925 18:30:18.089195 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3367296908 /var/lib/minikube/binaries/v1.31.1/kubectl
I0925 18:30:18.107665 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1272728177 /var/lib/minikube/binaries/v1.31.1/kubelet
I0925 18:30:18.178052 16451 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0925 18:30:18.186373 16451 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0925 18:30:18.186394 16451 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0925 18:30:18.186447 16451 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0925 18:30:18.194496 16451 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
I0925 18:30:18.194658 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1873495123 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0925 18:30:18.202901 16451 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0925 18:30:18.202926 16451 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0925 18:30:18.202964 16451 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0925 18:30:18.211824 16451 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0925 18:30:18.211955 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1913781820 /lib/systemd/system/kubelet.service
I0925 18:30:18.219757 16451 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0925 18:30:18.219874 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3037954303 /var/tmp/minikube/kubeadm.yaml.new
I0925 18:30:18.227853 16451 exec_runner.go:51] Run: grep 10.138.0.48 control-plane.minikube.internal$ /etc/hosts
I0925 18:30:18.229146 16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0925 18:30:18.443538 16451 exec_runner.go:51] Run: sudo systemctl start kubelet
I0925 18:30:18.457414 16451 certs.go:68] Setting up /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube for IP: 10.138.0.48
I0925 18:30:18.457442 16451 certs.go:194] generating shared ca certs ...
I0925 18:30:18.457464 16451 certs.go:226] acquiring lock for ca certs: {Name:mk797a982dec8749bfea78088159640624c15ee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:18.457631 16451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19681-5898/.minikube/ca.key
I0925 18:30:18.457692 16451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19681-5898/.minikube/proxy-client-ca.key
I0925 18:30:18.457711 16451 certs.go:256] generating profile certs ...
I0925 18:30:18.457781 16451 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.key
I0925 18:30:18.457798 16451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.crt with IP's: []
I0925 18:30:18.618354 16451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.crt ...
I0925 18:30:18.618393 16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.crt: {Name:mk81d2343b948c26787d0580b1c74f4e482d640f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:18.618526 16451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.key ...
I0925 18:30:18.618536 16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/client.key: {Name:mk0fb46d259c03bb8651bceba9e8ef242d4f55a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:18.618609 16451 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key.35c0634a
I0925 18:30:18.618623 16451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
I0925 18:30:18.877054 16451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
I0925 18:30:18.877084 16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk91734d19d868cb0d88642f455059ce2846399e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:18.877240 16451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key.35c0634a ...
I0925 18:30:18.877253 16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk352c45359716af5fc35e6c4c13021fc9cc05bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:18.877309 16451 certs.go:381] copying /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt
I0925 18:30:18.877391 16451 certs.go:385] copying /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key
I0925 18:30:18.877441 16451 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.key
I0925 18:30:18.877454 16451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0925 18:30:19.047846 16451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.crt ...
I0925 18:30:19.047879 16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.crt: {Name:mka7af6981f8a2fac11d40c392a395138807970f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:19.048002 16451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.key ...
I0925 18:30:19.048012 16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.key: {Name:mk5b6b2aebb127988bd38f1ca6fe09ccbc1125dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:19.048183 16451 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-5898/.minikube/certs/ca-key.pem (1679 bytes)
I0925 18:30:19.048215 16451 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-5898/.minikube/certs/ca.pem (1082 bytes)
I0925 18:30:19.048240 16451 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-5898/.minikube/certs/cert.pem (1123 bytes)
I0925 18:30:19.048264 16451 certs.go:484] found cert: /home/jenkins/minikube-integration/19681-5898/.minikube/certs/key.pem (1675 bytes)
I0925 18:30:19.048891 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0925 18:30:19.049001 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2420333353 /var/lib/minikube/certs/ca.crt
I0925 18:30:19.057542 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0925 18:30:19.057675 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2675950566 /var/lib/minikube/certs/ca.key
I0925 18:30:19.065862 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0925 18:30:19.066034 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2423427720 /var/lib/minikube/certs/proxy-client-ca.crt
I0925 18:30:19.074663 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0925 18:30:19.074785 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2568076541 /var/lib/minikube/certs/proxy-client-ca.key
I0925 18:30:19.082531 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0925 18:30:19.082656 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1739644477 /var/lib/minikube/certs/apiserver.crt
I0925 18:30:19.091467 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0925 18:30:19.091584 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1535756161 /var/lib/minikube/certs/apiserver.key
I0925 18:30:19.099358 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0925 18:30:19.099503 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube877895307 /var/lib/minikube/certs/proxy-client.crt
I0925 18:30:19.108502 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0925 18:30:19.108637 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1602453218 /var/lib/minikube/certs/proxy-client.key
I0925 18:30:19.116168 16451 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0925 18:30:19.116188 16451 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0925 18:30:19.116226 16451 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0925 18:30:19.123696 16451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19681-5898/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0925 18:30:19.123826 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3127878063 /usr/share/ca-certificates/minikubeCA.pem
I0925 18:30:19.131447 16451 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0925 18:30:19.131566 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3162351443 /var/lib/minikube/kubeconfig
I0925 18:30:19.139327 16451 exec_runner.go:51] Run: openssl version
I0925 18:30:19.142189 16451 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0925 18:30:19.150693 16451 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0925 18:30:19.151970 16451 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 25 18:30 /usr/share/ca-certificates/minikubeCA.pem
I0925 18:30:19.152020 16451 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0925 18:30:19.154861 16451 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0925 18:30:19.162328 16451 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0925 18:30:19.163381 16451 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0925 18:30:19.163417   16451 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0925 18:30:19.163520 16451 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0925 18:30:19.179284 16451 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0925 18:30:19.188795 16451 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0925 18:30:19.198467 16451 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0925 18:30:19.221416 16451 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0925 18:30:19.230683 16451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0925 18:30:19.230706 16451 kubeadm.go:157] found existing configuration files:
I0925 18:30:19.230743 16451 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0925 18:30:19.239208 16451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0925 18:30:19.239274 16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0925 18:30:19.248222 16451 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0925 18:30:19.256534 16451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0925 18:30:19.256598 16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0925 18:30:19.263820 16451 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0925 18:30:19.271588 16451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0925 18:30:19.271640 16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0925 18:30:19.278784 16451 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0925 18:30:19.286504 16451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0925 18:30:19.286570 16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0925 18:30:19.293876 16451 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0925 18:30:19.328605 16451 kubeadm.go:310] W0925 18:30:19.328438 17334 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0925 18:30:19.329179 16451 kubeadm.go:310] W0925 18:30:19.329105 17334 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0925 18:30:19.330820 16451 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0925 18:30:19.330970 16451 kubeadm.go:310] [preflight] Running pre-flight checks
I0925 18:30:19.425566 16451 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0925 18:30:19.425692 16451 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0925 18:30:19.425704 16451 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0925 18:30:19.425711 16451 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0925 18:30:19.436960 16451 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0925 18:30:19.439785 16451 out.go:235] - Generating certificates and keys ...
I0925 18:30:19.439829 16451 kubeadm.go:310] [certs] Using existing ca certificate authority
I0925 18:30:19.439842 16451 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0925 18:30:19.572806 16451 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0925 18:30:19.803906 16451 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0925 18:30:20.147115 16451 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0925 18:30:20.231179 16451 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0925 18:30:20.436139 16451 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0925 18:30:20.436276 16451 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0925 18:30:20.543302 16451 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0925 18:30:20.543334 16451 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0925 18:30:20.646090 16451 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0925 18:30:20.948624 16451 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0925 18:30:21.035183 16451 kubeadm.go:310] [certs] Generating "sa" key and public key
I0925 18:30:21.035335 16451 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0925 18:30:21.117747 16451 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0925 18:30:21.229183 16451 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0925 18:30:21.426035 16451 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0925 18:30:21.714766 16451 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0925 18:30:21.784437 16451 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0925 18:30:21.784986 16451 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0925 18:30:21.787213 16451 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0925 18:30:21.790394 16451 out.go:235] - Booting up control plane ...
I0925 18:30:21.790429 16451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0925 18:30:21.790453 16451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0925 18:30:21.790463 16451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0925 18:30:21.812714 16451 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0925 18:30:21.817285 16451 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0925 18:30:21.817324 16451 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0925 18:30:22.055497 16451 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0925 18:30:22.055521 16451 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0925 18:30:22.557175 16451 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.674211ms
I0925 18:30:22.557197 16451 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0925 18:30:27.059201 16451 kubeadm.go:310] [api-check] The API server is healthy after 4.501987322s
I0925 18:30:27.071740 16451 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0925 18:30:27.082028 16451 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0925 18:30:27.098206 16451 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0925 18:30:27.098226 16451 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0925 18:30:27.104929 16451 kubeadm.go:310] [bootstrap-token] Using token: 5s7ik4.w0bongazummr9xfg
I0925 18:30:27.106133 16451 out.go:235] - Configuring RBAC rules ...
I0925 18:30:27.106170 16451 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0925 18:30:27.108983 16451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0925 18:30:27.113924 16451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0925 18:30:27.116157 16451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0925 18:30:27.119331 16451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0925 18:30:27.121471 16451 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0925 18:30:27.465311 16451 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0925 18:30:27.888943 16451 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0925 18:30:28.466611 16451 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0925 18:30:28.467510 16451 kubeadm.go:310]
I0925 18:30:28.467525 16451 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0925 18:30:28.467529 16451 kubeadm.go:310]
I0925 18:30:28.467533 16451 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0925 18:30:28.467537 16451 kubeadm.go:310]
I0925 18:30:28.467541 16451 kubeadm.go:310] mkdir -p $HOME/.kube
I0925 18:30:28.467544 16451 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0925 18:30:28.467548 16451 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0925 18:30:28.467552 16451 kubeadm.go:310]
I0925 18:30:28.467556 16451 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0925 18:30:28.467560 16451 kubeadm.go:310]
I0925 18:30:28.467564 16451 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0925 18:30:28.467568 16451 kubeadm.go:310]
I0925 18:30:28.467571 16451 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0925 18:30:28.467574 16451 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0925 18:30:28.467577 16451 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0925 18:30:28.467582 16451 kubeadm.go:310]
I0925 18:30:28.467586 16451 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0925 18:30:28.467590 16451 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0925 18:30:28.467594 16451 kubeadm.go:310]
I0925 18:30:28.467598 16451 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5s7ik4.w0bongazummr9xfg \
I0925 18:30:28.467602 16451 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:4435e8a35f1d2e45630b26d823949b5678baf780d841b33ba9758e14b1072a05 \
I0925 18:30:28.467606 16451 kubeadm.go:310] --control-plane
I0925 18:30:28.467611 16451 kubeadm.go:310]
I0925 18:30:28.467614 16451 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0925 18:30:28.467619 16451 kubeadm.go:310]
I0925 18:30:28.467630 16451 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5s7ik4.w0bongazummr9xfg \
I0925 18:30:28.467632 16451 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:4435e8a35f1d2e45630b26d823949b5678baf780d841b33ba9758e14b1072a05
I0925 18:30:28.470413 16451 cni.go:84] Creating CNI manager for ""
I0925 18:30:28.470439 16451 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0925 18:30:28.472167 16451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0925 18:30:28.473505 16451 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0925 18:30:28.484343 16451 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0925 18:30:28.484500 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2018657651 /etc/cni/net.d/1-k8s.conflist
I0925 18:30:28.496124 16451 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0925 18:30:28.496216 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:28.496226 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_25T18_30_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0925 18:30:28.505493 16451 ops.go:34] apiserver oom_adj: -16
I0925 18:30:28.564741 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:29.065404 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:29.565096 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:30.065723 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:30.565845 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:31.065434 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:31.565391 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:32.065706 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:32.565139 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:33.065627 16451 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 18:30:33.137849 16451 kubeadm.go:1113] duration metric: took 4.64169258s to wait for elevateKubeSystemPrivileges
I0925 18:30:33.137890 16451 kubeadm.go:394] duration metric: took 13.974475789s to StartCluster
I0925 18:30:33.137913 16451 settings.go:142] acquiring lock: {Name:mk8e263098eda0612fe68e9ecc518f6074bb016b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:33.137991 16451 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19681-5898/kubeconfig
I0925 18:30:33.138769 16451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19681-5898/kubeconfig: {Name:mka83a0d132a79e910021e58a51d0487602d6da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 18:30:33.139110 16451 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0925 18:30:33.139290 16451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0925 18:30:33.139392 16451 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:30:33.139399 16451 addons.go:69] Setting yakd=true in profile "minikube"
I0925 18:30:33.139416 16451 addons.go:234] Setting addon yakd=true in "minikube"
I0925 18:30:33.139446 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.139696 16451 addons.go:69] Setting metrics-server=true in profile "minikube"
I0925 18:30:33.139714 16451 addons.go:234] Setting addon metrics-server=true in "minikube"
I0925 18:30:33.139741 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.140159 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.140174 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.140210 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.140286 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.140296 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.140323 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.140372 16451 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0925 18:30:33.140388 16451 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0925 18:30:33.140418 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.141144 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.141160 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.141192 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.141367 16451 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0925 18:30:33.141384 16451 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0925 18:30:33.141391 16451 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0925 18:30:33.141406 16451 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0925 18:30:33.141427 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.141444 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.142088 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.142104 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.142138 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.142319 16451 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0925 18:30:33.142317 16451 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0925 18:30:33.142364 16451 mustload.go:65] Loading cluster: minikube
I0925 18:30:33.142378 16451 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0925 18:30:33.142410 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.142563 16451 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0925 18:30:33.143068 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.143083 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.143095 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.143111 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.143112 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.143142 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.143598 16451 out.go:177] * Configuring local host environment ...
W0925 18:30:33.146773 16451 out.go:270] *
I0925 18:30:33.147559 16451 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0925 18:30:33.147580 16451 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0925 18:30:33.147611 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.147914 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.147933 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.147959 16451 addons.go:69] Setting volcano=true in profile "minikube"
I0925 18:30:33.148001 16451 addons.go:69] Setting registry=true in profile "minikube"
I0925 18:30:33.148025 16451 addons.go:234] Setting addon registry=true in "minikube"
I0925 18:30:33.148051 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.148003 16451 addons.go:234] Setting addon volcano=true in "minikube"
I0925 18:30:33.148261 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.148717 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.148738 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.148812 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.148877 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.148893 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.149033 16451 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0925 18:30:33.149062 16451 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0925 18:30:33.149097 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.149201 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.149400 16451 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0925 18:30:33.149423 16451 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0925 18:30:33.149792 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.149813 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.149849 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.149924 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.149941 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.149970 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.150161 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.150203 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.150245 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.147972 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.150581 16451 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0925 18:30:33.150602 16451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0925 18:30:33.147980 16451 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0925 18:30:33.150700 16451 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0925 18:30:33.150712 16451 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0925 18:30:33.150722 16451 out.go:270] *
W0925 18:30:33.150836 16451 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0925 18:30:33.150968 16451 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0925 18:30:33.151011 16451 out.go:270] *
W0925 18:30:33.151046 16451 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0925 18:30:33.151173 16451 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0925 18:30:33.151200 16451 out.go:270] *
W0925 18:30:33.151251 16451 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0925 18:30:33.151356 16451 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0925 18:30:33.152827 16451 out.go:177] * Verifying Kubernetes components...
I0925 18:30:33.158805 16451 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0925 18:30:33.161702 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.161730 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.161766 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.174262 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.174268 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.174406 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.175510 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.177960 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.191474 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.191907 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.193930 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.193988 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.195153 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.199342 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.201609 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.201658 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.201898 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.201950 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.202011 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.202030 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.202063 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.203368 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.211876 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.211935 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.212201 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.213803 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.213860 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.217956 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.217996 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.218493 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.218745 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.218796 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.220488 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.220534 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.222647 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.222668 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.224015 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.224055 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.226907 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.226923 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.229659 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.229671 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.229677 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.229688 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.230560 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.232579 16451 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0925 18:30:33.233204 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.233284 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.233906 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.234105 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.234210 16451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0925 18:30:33.234233 16451 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0925 18:30:33.234241 16451 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0925 18:30:33.234276 16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0925 18:30:33.234547 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.234731 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.234911 16451 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0925 18:30:33.235491 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.234917 16451 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0925 18:30:33.235518 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.235993 16451 out.go:177] - Using image docker.io/registry:2.8.3
I0925 18:30:33.236787 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.236827 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.236946 16451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0925 18:30:33.236979 16451 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0925 18:30:33.237290 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1939685709 /etc/kubernetes/addons/metrics-apiservice.yaml
I0925 18:30:33.237798 16451 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0925 18:30:33.237828 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0925 18:30:33.237936 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1941677908 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0925 18:30:33.238324 16451 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0925 18:30:33.239437 16451 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0925 18:30:33.239439 16451 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0925 18:30:33.239559 16451 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0925 18:30:33.239682 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3643534877 /etc/kubernetes/addons/yakd-ns.yaml
I0925 18:30:33.240047 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.240091 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.240785 16451 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0925 18:30:33.240820 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0925 18:30:33.240931 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube114926721 /etc/kubernetes/addons/registry-rc.yaml
I0925 18:30:33.241266 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.241306 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.242985 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.244427 16451 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0925 18:30:33.245706 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.245745 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.247789 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0925 18:30:33.247926 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1092265232 /etc/kubernetes/addons/storage-provisioner.yaml
I0925 18:30:33.248975 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.249023 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.249562 16451 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0925 18:30:33.249597 16451 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0925 18:30:33.249719 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3056456401 /etc/kubernetes/addons/ig-namespace.yaml
I0925 18:30:33.249993 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.250021 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.250670 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.252264 16451 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0925 18:30:33.252303 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.253042 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.253057 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.253088 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.253306 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.254759 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.254778 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.254864 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.254890 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.257914 16451 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
I0925 18:30:33.265875 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.268033 16451 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0925 18:30:33.268308 16451 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
I0925 18:30:33.269258 16451 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0925 18:30:33.269798 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0925 18:30:33.270710 16451 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
I0925 18:30:33.272512 16451 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0925 18:30:33.274554 16451 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0925 18:30:33.274596 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
I0925 18:30:33.275763 16451 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0925 18:30:33.275794 16451 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0925 18:30:33.277158 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube556987342 /etc/kubernetes/addons/registry-svc.yaml
I0925 18:30:33.284193 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.284218 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.284457 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.284481 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.284531 16451 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0925 18:30:33.284554 16451 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0925 18:30:33.285249 16451 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0925 18:30:33.285269 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2650464946 /etc/kubernetes/addons/yakd-sa.yaml
I0925 18:30:33.285275 16451 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0925 18:30:33.285288 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0925 18:30:33.285387 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3669827224 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0925 18:30:33.285415 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2335842946 /etc/kubernetes/addons/volcano-deployment.yaml
I0925 18:30:33.285532 16451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0925 18:30:33.285548 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0925 18:30:33.285637 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4231682905 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0925 18:30:33.285755 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.285774 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.286052 16451 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0925 18:30:33.289341 16451 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0925 18:30:33.289739 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.292034 16451 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0925 18:30:33.293564 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.294357 16451 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0925 18:30:33.294651 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.294499 16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0925 18:30:33.294920 16451 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0925 18:30:33.295054 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1034850640 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0925 18:30:33.295733 16451 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0925 18:30:33.295777 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:33.296449 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:33.296468 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:33.296675 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:33.296692 16451 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0925 18:30:33.296716 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0925 18:30:33.297325 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1762224384 /etc/kubernetes/addons/registry-proxy.yaml
I0925 18:30:33.298761 16451 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0925 18:30:33.299310 16451 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0925 18:30:33.301360 16451 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0925 18:30:33.301461 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0925 18:30:33.301926 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3220111299 /etc/kubernetes/addons/deployment.yaml
I0925 18:30:33.304768 16451 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0925 18:30:33.311833 16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0925 18:30:33.311865 16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0925 18:30:33.312009 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3369207112 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0925 18:30:33.312721 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0925 18:30:33.317073 16451 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0925 18:30:33.317109 16451 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0925 18:30:33.317225 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2898536084 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0925 18:30:33.319722 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0925 18:30:33.322780 16451 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0925 18:30:33.322798 16451 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0925 18:30:33.322890 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube652790666 /etc/kubernetes/addons/ig-role.yaml
I0925 18:30:33.322994 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.325959 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:33.331232 16451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0925 18:30:33.331260 16451 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0925 18:30:33.331383 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3962311012 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0925 18:30:33.331435 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0925 18:30:33.332002 16451 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0925 18:30:33.332021 16451 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0925 18:30:33.332146 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2752837202 /etc/kubernetes/addons/yakd-crb.yaml
I0925 18:30:33.341082 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.341148 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.343651 16451 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0925 18:30:33.343680 16451 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0925 18:30:33.343804 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1729627317 /etc/kubernetes/addons/ig-rolebinding.yaml
I0925 18:30:33.344013 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:33.344097 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:33.348230 16451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0925 18:30:33.348257 16451 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0925 18:30:33.348377 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3115841768 /etc/kubernetes/addons/metrics-server-service.yaml
I0925 18:30:33.356756 16451 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0925 18:30:33.356846 16451 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0925 18:30:33.356978 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3554861798 /etc/kubernetes/addons/yakd-svc.yaml
I0925 18:30:33.360804 16451 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0925 18:30:33.360836 16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0925 18:30:33.361005 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3043921755 /etc/kubernetes/addons/rbac-hostpath.yaml
I0925 18:30:33.363785 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.363822 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.368101 16451 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0925 18:30:33.369543 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:33.369593 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:33.369812 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.369858 16451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0925 18:30:33.369874 16451 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0925 18:30:33.369881 16451 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0925 18:30:33.369923 16451 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0925 18:30:33.371773 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0925 18:30:33.374211 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:33.376295 16451 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0925 18:30:33.377754 16451 out.go:177] - Using image docker.io/busybox:stable
I0925 18:30:33.379201 16451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0925 18:30:33.379230 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0925 18:30:33.379369 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4086388198 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0925 18:30:33.380060 16451 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0925 18:30:33.380065 16451 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0925 18:30:33.380086 16451 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0925 18:30:33.380091 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0925 18:30:33.380221 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3286914333 /etc/kubernetes/addons/yakd-dp.yaml
I0925 18:30:33.380230 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3148317559 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0925 18:30:33.381395 16451 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0925 18:30:33.381421 16451 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0925 18:30:33.381531 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3314067596 /etc/kubernetes/addons/ig-clusterrole.yaml
I0925 18:30:33.383629 16451 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0925 18:30:33.383753 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4272221833 /etc/kubernetes/addons/storageclass.yaml
I0925 18:30:33.401537 16451 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0925 18:30:33.401608 16451 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0925 18:30:33.401626 16451 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0925 18:30:33.401754 16451 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0925 18:30:33.402797 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube913712496 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0925 18:30:33.402875 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2073121036 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0925 18:30:33.430386 16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0925 18:30:33.430431 16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0925 18:30:33.430579 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1336820379 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0925 18:30:33.432590 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0925 18:30:33.436641 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0925 18:30:33.443547 16451 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0925 18:30:33.443608 16451 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0925 18:30:33.443764 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3690746514 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0925 18:30:33.445617 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0925 18:30:33.446976 16451 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0925 18:30:33.447014 16451 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0925 18:30:33.447180 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2071725817 /etc/kubernetes/addons/ig-crd.yaml
I0925 18:30:33.471342 16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0925 18:30:33.471388 16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0925 18:30:33.471519 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2576139560 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0925 18:30:33.523722 16451 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0925 18:30:33.523759 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0925 18:30:33.523886 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1762273621 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0925 18:30:33.595131 16451 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0925 18:30:33.595163 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0925 18:30:33.595289 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3385165694 /etc/kubernetes/addons/ig-daemonset.yaml
I0925 18:30:33.596893 16451 exec_runner.go:51] Run: sudo systemctl start kubelet
I0925 18:30:33.607923 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0925 18:30:33.621789 16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0925 18:30:33.621834 16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0925 18:30:33.621984 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2508943280 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0925 18:30:33.659245 16451 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
I0925 18:30:33.669638 16451 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
I0925 18:30:33.669664 16451 node_ready.go:38] duration metric: took 10.387108ms for node "ubuntu-20-agent-2" to be "Ready" ...
I0925 18:30:33.669677 16451 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 18:30:33.687628 16451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mgqmr" in "kube-system" namespace to be "Ready" ...
I0925 18:30:33.762737 16451 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0925 18:30:33.762776 16451 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0925 18:30:33.762933 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1967250124 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0925 18:30:33.763788 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0925 18:30:33.885686 16451 addons.go:475] Verifying addon registry=true in "minikube"
I0925 18:30:33.887628 16451 out.go:177] * Verifying registry addon...
I0925 18:30:33.896383 16451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0925 18:30:33.910809 16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0925 18:30:33.910852 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0925 18:30:33.911665 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube208056867 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0925 18:30:33.914840 16451 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0925 18:30:33.914869 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:34.070693 16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0925 18:30:34.070738 16451 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0925 18:30:34.070878 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1397583527 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0925 18:30:34.093544 16451 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0925 18:30:34.106027 16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0925 18:30:34.106065 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0925 18:30:34.106202 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1983209738 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0925 18:30:34.330810 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.045484746s)
I0925 18:30:34.373046 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.001226211s)
I0925 18:30:34.373081 16451 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0925 18:30:34.378844 16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0925 18:30:34.378881 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0925 18:30:34.379019 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1702314943 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0925 18:30:34.400871 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:34.422755 16451 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0925 18:30:34.422788 16451 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0925 18:30:34.422934 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2340578506 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0925 18:30:34.456706 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0925 18:30:34.513395 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.080736575s)
I0925 18:30:34.525092 16451 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0925 18:30:34.600867 16451 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0925 18:30:34.832021 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.068176903s)
I0925 18:30:34.855999 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.419315225s)
I0925 18:30:34.901634 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:35.415782 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:35.476541 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.868517254s)
W0925 18:30:35.476596 16451 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0925 18:30:35.476627 16451 retry.go:31] will retry after 311.413631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0925 18:30:35.695191 16451 pod_ready.go:103] pod "coredns-7c65d6cfc9-mgqmr" in "kube-system" namespace has status "Ready":"False"
I0925 18:30:35.788442 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0925 18:30:35.911035 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:36.348836 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.036070958s)
I0925 18:30:36.399907 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:36.576229 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.119461859s)
I0925 18:30:36.576323 16451 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0925 18:30:36.579030 16451 out.go:177] * Verifying csi-hostpath-driver addon...
I0925 18:30:36.581745 16451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0925 18:30:36.586695 16451 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0925 18:30:36.586721 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:36.921937 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:37.088172 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:37.401251 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:37.586556 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:37.917509 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:38.087190 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:38.193784 16451 pod_ready.go:93] pod "coredns-7c65d6cfc9-mgqmr" in "kube-system" namespace has status "Ready":"True"
I0925 18:30:38.193808 16451 pod_ready.go:82] duration metric: took 4.5060921s for pod "coredns-7c65d6cfc9-mgqmr" in "kube-system" namespace to be "Ready" ...
I0925 18:30:38.193840 16451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace to be "Ready" ...
I0925 18:30:38.402079 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:38.587520 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:38.712716 16451 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.924222348s)
I0925 18:30:38.900372 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:39.086506 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:39.400856 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:39.587166 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:39.900692 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:40.088382 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:40.200260 16451 pod_ready.go:103] pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace has status "Ready":"False"
I0925 18:30:40.262235 16451 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0925 18:30:40.262381 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2012514082 /var/lib/minikube/google_application_credentials.json
I0925 18:30:40.272537 16451 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0925 18:30:40.272676 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1109561959 /var/lib/minikube/google_cloud_project
I0925 18:30:40.283829 16451 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0925 18:30:40.283916 16451 host.go:66] Checking if "minikube" exists ...
I0925 18:30:40.284669 16451 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0925 18:30:40.284694 16451 api_server.go:166] Checking apiserver status ...
I0925 18:30:40.284730 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:40.302688 16451 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17753/cgroup
I0925 18:30:40.313184 16451 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295"
I0925 18:30:40.313240 16451 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/053a2b8fb3519fc77a185cf8d624d144b0847b1e81e5e1bc37339b8807eb4295/freezer.state
I0925 18:30:40.322825 16451 api_server.go:204] freezer state: "THAWED"
I0925 18:30:40.322856 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:40.326387 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:40.326464 16451 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0925 18:30:40.363360 16451 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0925 18:30:40.399700 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:40.521768 16451 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0925 18:30:40.544448 16451 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0925 18:30:40.544518 16451 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0925 18:30:40.544695 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube384694292 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0925 18:30:40.554885 16451 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0925 18:30:40.554915 16451 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0925 18:30:40.555017 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1039535344 /etc/kubernetes/addons/gcp-auth-service.yaml
I0925 18:30:40.565411 16451 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0925 18:30:40.565438 16451 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0925 18:30:40.565541 16451 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2115557671 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0925 18:30:40.576440 16451 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0925 18:30:40.586428 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:41.006444 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:41.088172 16451 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0925 18:30:41.089763 16451 out.go:177] * Verifying gcp-auth addon...
I0925 18:30:41.092100 16451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0925 18:30:41.112891 16451 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0925 18:30:41.115789 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:41.199169 16451 pod_ready.go:98] pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-25 18:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-25 18:30:34 +0000 UTC,FinishedAt:2024-09-25 18:30:40 +0000 UTC,ContainerID:docker://b4f8772adf204b5202d7773ae703fc4274d3be3554c4d78f6f3416d87df07fcc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://b4f8772adf204b5202d7773ae703fc4274d3be3554c4d78f6f3416d87df07fcc Started:0xc00224a3d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0029a2190} {Name:kube-api-access-l6btb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0029a21a0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0925 18:30:41.199205 16451 pod_ready.go:82] duration metric: took 3.005354512s for pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace to be "Ready" ...
E0925 18:30:41.199220 16451 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-pk25b" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-25 18:30:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-25 18:30:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-25 18:30:34 +0000 UTC,FinishedAt:2024-09-25 18:30:40 +0000 UTC,ContainerID:docker://b4f8772adf204b5202d7773ae703fc4274d3be3554c4d78f6f3416d87df07fcc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://b4f8772adf204b5202d7773ae703fc4274d3be3554c4d78f6f3416d87df07fcc Started:0xc00224a3d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0029a2190} {Name:kube-api-access-l6btb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0029a21a0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0925 18:30:41.199231 16451 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.203663 16451 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0925 18:30:41.203684 16451 pod_ready.go:82] duration metric: took 4.44637ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.203693 16451 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.207717 16451 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0925 18:30:41.207735 16451 pod_ready.go:82] duration metric: took 4.035155ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.207743 16451 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.211714 16451 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0925 18:30:41.211737 16451 pod_ready.go:82] duration metric: took 3.987863ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.211746 16451 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ms7l" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.218481 16451 pod_ready.go:93] pod "kube-proxy-5ms7l" in "kube-system" namespace has status "Ready":"True"
I0925 18:30:41.218505 16451 pod_ready.go:82] duration metric: took 6.752288ms for pod "kube-proxy-5ms7l" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.218518 16451 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.400368 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:41.586150 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:41.597209 16451 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0925 18:30:41.597229 16451 pod_ready.go:82] duration metric: took 378.704419ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.597239 16451 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-spbz5" in "kube-system" namespace to be "Ready" ...
I0925 18:30:41.899772 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:42.086780 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:42.397274 16451 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-spbz5" in "kube-system" namespace has status "Ready":"True"
I0925 18:30:42.397297 16451 pod_ready.go:82] duration metric: took 800.049841ms for pod "nvidia-device-plugin-daemonset-spbz5" in "kube-system" namespace to be "Ready" ...
I0925 18:30:42.397310 16451 pod_ready.go:39] duration metric: took 8.727620075s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 18:30:42.397332 16451 api_server.go:52] waiting for apiserver process to appear ...
I0925 18:30:42.397386 16451 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 18:30:42.399575 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:42.414293 16451 api_server.go:72] duration metric: took 9.26288983s to wait for apiserver process to appear ...
I0925 18:30:42.414320 16451 api_server.go:88] waiting for apiserver healthz status ...
I0925 18:30:42.414343 16451 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0925 18:30:42.418653 16451 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0925 18:30:42.419541 16451 api_server.go:141] control plane version: v1.31.1
I0925 18:30:42.419561 16451 api_server.go:131] duration metric: took 5.234825ms to wait for apiserver health ...
I0925 18:30:42.419587 16451 system_pods.go:43] waiting for kube-system pods to appear ...
I0925 18:30:42.587132 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:42.601659 16451 system_pods.go:59] 16 kube-system pods found
I0925 18:30:42.601689 16451 system_pods.go:61] "coredns-7c65d6cfc9-mgqmr" [e6ab0f25-7fe6-4b26-9d11-32ff30994e10] Running
I0925 18:30:42.601697 16451 system_pods.go:61] "csi-hostpath-attacher-0" [ab781cd0-df2a-4298-a481-a690f95ef7f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0925 18:30:42.601703 16451 system_pods.go:61] "csi-hostpath-resizer-0" [87eb9560-2857-4f0a-8447-a8a0946867ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0925 18:30:42.601711 16451 system_pods.go:61] "csi-hostpathplugin-8xdw7" [79f4ecb7-c8cd-40ae-8312-bcb3b705657c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0925 18:30:42.601715 16451 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6b5e6912-9880-49db-9181-4908a70236c1] Running
I0925 18:30:42.601719 16451 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [8ee32a0c-c941-40b4-bc62-85f1fda3283b] Running
I0925 18:30:42.601723 16451 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [382a05ae-5914-42c3-af8a-f59edc16429c] Running
I0925 18:30:42.601727 16451 system_pods.go:61] "kube-proxy-5ms7l" [d5e49d6f-dfb6-41fb-9873-45b6e0ce1470] Running
I0925 18:30:42.601733 16451 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [aa16179e-57f0-4ba5-8202-99f4be614808] Running
I0925 18:30:42.601738 16451 system_pods.go:61] "metrics-server-84c5f94fbc-5lrcn" [5acd34ff-05b5-4944-98e6-9578b96dd661] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 18:30:42.601745 16451 system_pods.go:61] "nvidia-device-plugin-daemonset-spbz5" [bb5284d5-47c1-4dd6-8540-374b1dd30ffb] Running
I0925 18:30:42.601751 16451 system_pods.go:61] "registry-66c9cd494c-hjbk7" [a9c82301-9560-4dd9-a31e-55bc04efd0e3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0925 18:30:42.601756 16451 system_pods.go:61] "registry-proxy-wxxhj" [532ef9f6-818b-4628-a77d-5cb0d7ae89b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0925 18:30:42.601765 16451 system_pods.go:61] "snapshot-controller-56fcc65765-plm79" [fe1144db-150a-4079-b701-d1f55f2e4c2d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0925 18:30:42.601773 16451 system_pods.go:61] "snapshot-controller-56fcc65765-smq7t" [614d5a58-5053-4c5a-a413-b760a6578a07] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0925 18:30:42.601778 16451 system_pods.go:61] "storage-provisioner" [c5852ab1-5e02-4a46-8de1-98c989aa3d17] Running
I0925 18:30:42.601787 16451 system_pods.go:74] duration metric: took 182.195666ms to wait for pod list to return data ...
I0925 18:30:42.601796 16451 default_sa.go:34] waiting for default service account to be created ...
I0925 18:30:42.798177 16451 default_sa.go:45] found service account: "default"
I0925 18:30:42.798201 16451 default_sa.go:55] duration metric: took 196.39941ms for default service account to be created ...
I0925 18:30:42.798209 16451 system_pods.go:116] waiting for k8s-apps to be running ...
I0925 18:30:42.900625 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:43.003136 16451 system_pods.go:86] 16 kube-system pods found
I0925 18:30:43.003170 16451 system_pods.go:89] "coredns-7c65d6cfc9-mgqmr" [e6ab0f25-7fe6-4b26-9d11-32ff30994e10] Running
I0925 18:30:43.003183 16451 system_pods.go:89] "csi-hostpath-attacher-0" [ab781cd0-df2a-4298-a481-a690f95ef7f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0925 18:30:43.003192 16451 system_pods.go:89] "csi-hostpath-resizer-0" [87eb9560-2857-4f0a-8447-a8a0946867ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0925 18:30:43.003203 16451 system_pods.go:89] "csi-hostpathplugin-8xdw7" [79f4ecb7-c8cd-40ae-8312-bcb3b705657c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0925 18:30:43.003213 16451 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6b5e6912-9880-49db-9181-4908a70236c1] Running
I0925 18:30:43.003220 16451 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [8ee32a0c-c941-40b4-bc62-85f1fda3283b] Running
I0925 18:30:43.003229 16451 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [382a05ae-5914-42c3-af8a-f59edc16429c] Running
I0925 18:30:43.003235 16451 system_pods.go:89] "kube-proxy-5ms7l" [d5e49d6f-dfb6-41fb-9873-45b6e0ce1470] Running
I0925 18:30:43.003243 16451 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [aa16179e-57f0-4ba5-8202-99f4be614808] Running
I0925 18:30:43.003253 16451 system_pods.go:89] "metrics-server-84c5f94fbc-5lrcn" [5acd34ff-05b5-4944-98e6-9578b96dd661] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 18:30:43.003261 16451 system_pods.go:89] "nvidia-device-plugin-daemonset-spbz5" [bb5284d5-47c1-4dd6-8540-374b1dd30ffb] Running
I0925 18:30:43.003271 16451 system_pods.go:89] "registry-66c9cd494c-hjbk7" [a9c82301-9560-4dd9-a31e-55bc04efd0e3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0925 18:30:43.003282 16451 system_pods.go:89] "registry-proxy-wxxhj" [532ef9f6-818b-4628-a77d-5cb0d7ae89b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0925 18:30:43.003294 16451 system_pods.go:89] "snapshot-controller-56fcc65765-plm79" [fe1144db-150a-4079-b701-d1f55f2e4c2d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0925 18:30:43.003303 16451 system_pods.go:89] "snapshot-controller-56fcc65765-smq7t" [614d5a58-5053-4c5a-a413-b760a6578a07] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0925 18:30:43.003314 16451 system_pods.go:89] "storage-provisioner" [c5852ab1-5e02-4a46-8de1-98c989aa3d17] Running
I0925 18:30:43.003324 16451 system_pods.go:126] duration metric: took 205.108575ms to wait for k8s-apps to be running ...
I0925 18:30:43.003335 16451 system_svc.go:44] waiting for kubelet service to be running ....
I0925 18:30:43.003394 16451 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0925 18:30:43.019035 16451 system_svc.go:56] duration metric: took 15.687045ms WaitForService to wait for kubelet
I0925 18:30:43.019066 16451 kubeadm.go:582] duration metric: took 9.867676746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0925 18:30:43.019086 16451 node_conditions.go:102] verifying NodePressure condition ...
I0925 18:30:43.086315 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:43.197637 16451 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0925 18:30:43.197669 16451 node_conditions.go:123] node cpu capacity is 8
I0925 18:30:43.197683 16451 node_conditions.go:105] duration metric: took 178.591645ms to run NodePressure ...
I0925 18:30:43.197698 16451 start.go:241] waiting for startup goroutines ...
I0925 18:30:43.399806 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:43.586530 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:43.900333 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:44.086004 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:44.400666 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:44.585831 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:44.900091 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:45.085909 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:45.400343 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:45.586450 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:45.899780 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:46.087440 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:46.400214 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:46.585782 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:46.901211 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:47.086978 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:47.399664 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:47.586835 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:47.900847 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:48.086798 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:48.401253 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:48.586397 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:48.899930 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:49.086589 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:49.399666 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:49.586203 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:49.900691 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0925 18:30:50.086362 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:50.400399 16451 kapi.go:107] duration metric: took 16.504031233s to wait for kubernetes.io/minikube-addons=registry ...
I0925 18:30:50.586330 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:51.086545 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:51.586860 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:52.086817 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:52.586279 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:53.086239 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:53.629639 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:54.085309 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:54.586129 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:55.086691 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:55.586932 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:56.086559 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:56.586461 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:57.086875 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:57.586312 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:58.086404 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:58.586528 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:59.085287 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:30:59.586449 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:00.086130 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:00.587474 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:01.086369 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:01.585875 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:02.087049 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:02.585712 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:03.086805 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:03.586315 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:04.086086 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:04.586332 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:05.086485 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:05.586589 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:06.086598 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:06.586718 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:07.087132 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:07.586595 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:08.086828 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:08.587140 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:09.087029 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:09.586459 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:10.086701 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:10.586261 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:11.086520 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:11.586679 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:12.086461 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0925 18:31:12.588659 16451 kapi.go:107] duration metric: took 36.006912472s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0925 18:31:22.595405 16451 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0925 18:31:22.595426 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:23.095492 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:23.595620 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:24.095355 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:24.595487 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:25.095234 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:25.595709 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:26.096030 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:26.595208 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:27.095688 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:27.595873 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:28.096239 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:28.595342 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:29.095525 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:29.595810 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:30.096274 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:30.596578 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:31.095487 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:31.595405 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:32.095342 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:32.595175 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:33.094845 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:33.594999 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:34.095122 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:34.595185 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:35.094838 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:35.595437 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:36.095549 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:36.595475 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:37.095409 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:37.595187 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:38.095488 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:38.595435 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:39.095388 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:39.595451 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:40.095719 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:40.596454 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:41.095217 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:41.595253 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:42.095450 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:42.595359 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:43.095211 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:43.595301 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:44.095610 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:44.595777 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:45.095647 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:45.633259 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:46.095076 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:46.594693 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:47.096540 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:47.595385 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:48.095713 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:48.596247 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:49.095323 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:49.595466 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:50.096089 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:50.596910 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:51.096466 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:51.595556 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:52.095331 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:52.595165 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:53.094921 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:53.595125 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:54.095336 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:54.595218 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:55.094994 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:55.596076 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:56.096173 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:56.595329 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:57.095667 16451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0925 18:31:57.595949 16451 kapi.go:107] duration metric: took 1m16.503849056s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0925 18:31:57.597642 16451 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0925 18:31:57.598974 16451 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0925 18:31:57.600212 16451 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0925 18:31:57.601516 16451 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0925 18:31:57.602928 16451 addons.go:510] duration metric: took 1m24.463638703s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner metrics-server yakd inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0925 18:31:57.602969 16451 start.go:246] waiting for cluster config update ...
I0925 18:31:57.602984 16451 start.go:255] writing updated cluster config ...
I0925 18:31:57.603220 16451 exec_runner.go:51] Run: rm -f paused
I0925 18:31:57.646747 16451 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0925 18:31:57.648604 16451 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Fri 2024-08-16 02:18:09 UTC, end at Wed 2024-09-25 18:41:51 UTC. --
Sep 25 18:34:10 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:34:10.029764857Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=44905079cd82c12f traceID=4b30a805247d77635d593f3c15582d70
Sep 25 18:34:10 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:34:10.031835361Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=44905079cd82c12f traceID=4b30a805247d77635d593f3c15582d70
Sep 25 18:34:11 ubuntu-20-agent-2 cri-dockerd[16995]: time="2024-09-25T18:34:11Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 25 18:34:12 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:34:12.455491143Z" level=info msg="ignoring event" container=4690f8e026dd1169f9c4eec417441ba3c0580aa39970920f7d9655fb78ca7a0d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 25 18:35:32 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:35:32.023861114Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4c6ac7a3b758f898 traceID=9b7fab46f0dfdacfe093200bad8a9f5f
Sep 25 18:35:32 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:35:32.025960844Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4c6ac7a3b758f898 traceID=9b7fab46f0dfdacfe093200bad8a9f5f
Sep 25 18:36:59 ubuntu-20-agent-2 cri-dockerd[16995]: time="2024-09-25T18:36:59Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 25 18:37:00 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:37:00.358711022Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 25 18:37:00 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:37:00.358711917Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 25 18:37:00 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:37:00.360536008Z" level=error msg="Error running exec 344599b9225df211c40d9fca1664129211e67b280541e966b6649781a91b7719 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=cac79247b65b7d3f traceID=5c895f2fb3dfb1ff872ca7c9851e2ebc
Sep 25 18:37:00 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:37:00.547427980Z" level=info msg="ignoring event" container=f67c8f5ba20d6e3d3737a5f718559c84a0d3a952befbebadf13922595920fc5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 25 18:38:25 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:38:25.030437812Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=11679319bd599959 traceID=d8422ab2070c00277f3307a23d14b3e9
Sep 25 18:38:25 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:38:25.032445151Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=11679319bd599959 traceID=d8422ab2070c00277f3307a23d14b3e9
Sep 25 18:40:50 ubuntu-20-agent-2 cri-dockerd[16995]: time="2024-09-25T18:40:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/415346bb1fe83d0c7a779f8debf0cb57fc0f23e0b47ed1f7c05540a1180f0d15/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 25 18:40:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:40:50.876102907Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=4c5a19d7991812f4 traceID=a031fb633320747ddd207b1c01d90f59
Sep 25 18:40:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:40:50.878250243Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=4c5a19d7991812f4 traceID=a031fb633320747ddd207b1c01d90f59
Sep 25 18:41:03 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:03.027714176Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=0f9f0fa70d6fc92d traceID=7eeaaec6a12e80f724d576f367038285
Sep 25 18:41:03 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:03.030193818Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=0f9f0fa70d6fc92d traceID=7eeaaec6a12e80f724d576f367038285
Sep 25 18:41:32 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:32.016791989Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2af4751d728f441a traceID=7aa696718ce0bceba99ede8381916d2a
Sep 25 18:41:32 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:32.018874015Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2af4751d728f441a traceID=7aa696718ce0bceba99ede8381916d2a
Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.339210870Z" level=info msg="ignoring event" container=415346bb1fe83d0c7a779f8debf0cb57fc0f23e0b47ed1f7c05540a1180f0d15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.588970645Z" level=info msg="ignoring event" container=b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.647766580Z" level=info msg="ignoring event" container=eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.732825423Z" level=info msg="ignoring event" container=42aea5973d0dd26d16573ddd320ff9a5160d12b1f078add073c56550007e3438 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 25 18:41:50 ubuntu-20-agent-2 dockerd[16667]: time="2024-09-25T18:41:50.799032787Z" level=info msg="ignoring event" container=adab75f552f6ca95d8937832b1e4547a1ff5c23d7d19140dc01ed728e99ef4c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
f67c8f5ba20d6 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 4 minutes ago Exited gadget 6 1c3d0a26b789e gadget-klq6x
0fc58426b7166 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 d4c31a54c3c06 gcp-auth-89d5ffd79-pnqvf
c5664d9ec2161 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 bb2b2414aff14 csi-hostpathplugin-8xdw7
edfd4f3aeaf2c registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 bb2b2414aff14 csi-hostpathplugin-8xdw7
9d43e4f0e4b21 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 bb2b2414aff14 csi-hostpathplugin-8xdw7
9f5191c90b59d registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 bb2b2414aff14 csi-hostpathplugin-8xdw7
7490bd1f9d575 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 bb2b2414aff14 csi-hostpathplugin-8xdw7
8094f730f1dff registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 728e5af95ebd4 csi-hostpath-resizer-0
d9b117fe732bc registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 bb2b2414aff14 csi-hostpathplugin-8xdw7
e94995bc970cb registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 99ea965287f3e csi-hostpath-attacher-0
758f18afebdc4 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 7ab25bed647bd snapshot-controller-56fcc65765-smq7t
029e206f33ec2 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 e1be083360544 snapshot-controller-56fcc65765-plm79
f3eb330f8e9c8 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 10 minutes ago Running local-path-provisioner 0 b879f3010c7ec local-path-provisioner-86d989889c-ns2rp
c71523aee2964 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 506664c420562 yakd-dashboard-67d98fc6b-pd9f6
db26d15c05c29 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 11 minutes ago Running metrics-server 0 d32de4c1516f1 metrics-server-84c5f94fbc-5lrcn
eeffae39df349 gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 11 minutes ago Exited registry-proxy 0 adab75f552f6c registry-proxy-wxxhj
b1e7a6c95c0b3 registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 11 minutes ago Exited registry 0 42aea5973d0dd registry-66c9cd494c-hjbk7
08a801c6fb611 gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 11 minutes ago Running cloud-spanner-emulator 0 374aeae3a445f cloud-spanner-emulator-5b584cc74-t6jwv
5f709a41d3bef nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 9cfaa0d1c61e2 nvidia-device-plugin-daemonset-spbz5
52f5a7a4d3f33 6e38f40d628db 11 minutes ago Running storage-provisioner 0 e85657bb89546 storage-provisioner
13a3bc32ac447 c69fa2e9cbf5f 11 minutes ago Running coredns 0 d52a0a37a47cf coredns-7c65d6cfc9-mgqmr
e6c312f7f1afb 60c005f310ff3 11 minutes ago Running kube-proxy 0 01924e24f8a6d kube-proxy-5ms7l
bd8f95275d938 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 090b4902f6933 kube-controller-manager-ubuntu-20-agent-2
053a2b8fb3519 6bab7719df100 11 minutes ago Running kube-apiserver 0 b521914ef847e kube-apiserver-ubuntu-20-agent-2
4d1918b8b5d82 9aa1fad941575 11 minutes ago Running kube-scheduler 0 e5d3ebfe7120a kube-scheduler-ubuntu-20-agent-2
1b48eaeaff330 2e96e5913fc06 11 minutes ago Running etcd 0 3a9533fc1d87c etcd-ubuntu-20-agent-2
==> coredns [13a3bc32ac44] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:58843 - 56094 "HINFO IN 7210559415351330124.1321847438580936925. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066776742s
[INFO] 10.244.0.23:49930 - 2627 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000381044s
[INFO] 10.244.0.23:60946 - 32438 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191239s
[INFO] 10.244.0.23:43280 - 47948 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134226s
[INFO] 10.244.0.23:41924 - 21466 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000172823s
[INFO] 10.244.0.23:47311 - 41096 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119807s
[INFO] 10.244.0.23:49953 - 3492 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125507s
[INFO] 10.244.0.23:36555 - 2749 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003946393s
[INFO] 10.244.0.23:43512 - 7280 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005400834s
[INFO] 10.244.0.23:35963 - 4796 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00423583s
[INFO] 10.244.0.23:45359 - 7454 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00467304s
[INFO] 10.244.0.23:56422 - 12595 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00353299s
[INFO] 10.244.0.23:54598 - 5313 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005209375s
[INFO] 10.244.0.23:32804 - 48866 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002523889s
[INFO] 10.244.0.23:32831 - 28635 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.003957719s
==> describe nodes <==
Name: ubuntu-20-agent-2
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-2
kubernetes.io/os=linux
minikube.k8s.io/commit=cb9e6220ecbd737c1d09ad9630c6f144f437664a
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_25T18_30_28_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-2
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 25 Sep 2024 18:30:25 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-2
AcquireTime: <unset>
RenewTime: Wed, 25 Sep 2024 18:41:41 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 25 Sep 2024 18:37:35 +0000   Wed, 25 Sep 2024 18:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 25 Sep 2024 18:37:35 +0000   Wed, 25 Sep 2024 18:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 25 Sep 2024 18:37:35 +0000   Wed, 25 Sep 2024 18:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 25 Sep 2024 18:37:35 +0000   Wed, 25 Sep 2024 18:30:25 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 10.138.0.48
Hostname: ubuntu-20-agent-2
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859312Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859312Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 1ec29a5c-5f40-e854-ccac-68a60c2524db
Boot ID: 00a417d5-0a7b-4811-9a16-2ae49d98a388
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods:          (20 in total)
  Namespace           Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------           ----                                        ------------  ----------  ---------------  -------------  ---
  default             busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
  default             cloud-spanner-emulator-5b584cc74-t6jwv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  gadget              gadget-klq6x                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  gcp-auth            gcp-auth-89d5ffd79-pnqvf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
  kube-system         coredns-7c65d6cfc9-mgqmr                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
  kube-system         csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         csi-hostpathplugin-8xdw7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         etcd-ubuntu-20-agent-2                      100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
  kube-system         kube-apiserver-ubuntu-20-agent-2            250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         kube-controller-manager-ubuntu-20-agent-2   200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         kube-proxy-5ms7l                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         kube-scheduler-ubuntu-20-agent-2            100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         metrics-server-84c5f94fbc-5lrcn             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
  kube-system         nvidia-device-plugin-daemonset-spbz5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         snapshot-controller-56fcc65765-plm79        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         snapshot-controller-56fcc65765-smq7t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  kube-system         storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  local-path-storage  local-path-provisioner-86d989889c-ns2rp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
  yakd-dashboard      yakd-dashboard-67d98fc6b-pd9f6              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  0 (0%)
  memory             498Mi (1%)  426Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age  From             Message
  ----     ------                   ---  ----             -------
  Normal   Starting                 11m  kube-proxy
  Normal   Starting                 11m  kubelet          Starting kubelet.
  Warning  CgroupV1                 11m  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
  Normal   NodeAllocatableEnforced  11m  kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  11m  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    11m  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     11m  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
  Normal   RegisteredNode           11m  node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
==> dmesg <==
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 23 db d7 94 69 08 06
[ +1.056461] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a b1 3e de 4c d0 08 06
[ +0.013100] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 04 03 e8 69 80 08 06
[Sep25 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff fe f6 fc 54 a9 a1 08 06
[ +1.551951] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff 86 c1 cd c3 81 bc 08 06
[ +2.042059] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ba fe c1 8e fe 77 08 06
[ +4.549208] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 13 86 4f d1 5d 08 06
[ +0.000218] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 1a d7 35 36 99 d7 08 06
[ +0.523811] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 ed 10 19 60 ad 08 06
[ +35.123201] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 7c 95 13 db ca 08 06
[ +0.028098] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 13 1f 1d 7b 27 08 06
[ +11.094611] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 cf 8f 57 1e a9 08 06
[ +0.000495] IPv4: martian source 10.244.0.23 from 10.244.0.4, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 04 d9 da 3a b2 08 06
==> etcd [1b48eaeaff33] <==
{"level":"info","ts":"2024-09-25T18:30:24.437007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
{"level":"info","ts":"2024-09-25T18:30:24.437019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-25T18:30:24.437973Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-25T18:30:24.438026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-25T18:30:24.438053Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-25T18:30:24.438123Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-25T18:30:24.438164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-25T18:30:24.438199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-25T18:30:24.438744Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-25T18:30:24.438836Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-25T18:30:24.438861Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-25T18:30:24.439231Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-25T18:30:24.439507Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-25T18:30:24.440931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
{"level":"info","ts":"2024-09-25T18:30:24.441498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-25T18:30:41.002898Z","caller":"traceutil/trace.go:171","msg":"trace[1815222204] transaction","detail":"{read_only:false; response_revision:822; number_of_response:1; }","duration":"124.633512ms","start":"2024-09-25T18:30:40.878245Z","end":"2024-09-25T18:30:41.002878Z","steps":["trace[1815222204] 'process raft request' (duration: 122.108788ms)"],"step_count":1}
{"level":"info","ts":"2024-09-25T18:30:41.003969Z","caller":"traceutil/trace.go:171","msg":"trace[155609340] linearizableReadLoop","detail":"{readStateIndex:843; appliedIndex:841; }","duration":"124.399571ms","start":"2024-09-25T18:30:40.879557Z","end":"2024-09-25T18:30:41.003956Z","steps":["trace[155609340] 'read index received' (duration: 120.895111ms)","trace[155609340] 'applied index is now lower than readState.Index' (duration: 3.503772ms)"],"step_count":2}
{"level":"info","ts":"2024-09-25T18:30:41.003992Z","caller":"traceutil/trace.go:171","msg":"trace[730131621] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"125.731579ms","start":"2024-09-25T18:30:40.878247Z","end":"2024-09-25T18:30:41.003978Z","steps":["trace[730131621] 'process raft request' (duration: 125.637436ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-25T18:30:41.004128Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.777679ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-25T18:30:41.004198Z","caller":"traceutil/trace.go:171","msg":"trace[1242519320] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:823; }","duration":"105.870533ms","start":"2024-09-25T18:30:40.898317Z","end":"2024-09-25T18:30:41.004187Z","steps":["trace[1242519320] 'agreement among raft nodes before linearized reading' (duration: 105.733612ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-25T18:30:41.004138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.570127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
{"level":"info","ts":"2024-09-25T18:30:41.004289Z","caller":"traceutil/trace.go:171","msg":"trace[2099751762] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:823; }","duration":"124.718851ms","start":"2024-09-25T18:30:40.879553Z","end":"2024-09-25T18:30:41.004272Z","steps":["trace[2099751762] 'agreement among raft nodes before linearized reading' (duration: 124.479819ms)"],"step_count":1}
{"level":"info","ts":"2024-09-25T18:40:24.455745Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1687}
{"level":"info","ts":"2024-09-25T18:40:24.479584Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1687,"took":"23.299281ms","hash":1946569372,"current-db-size-bytes":8171520,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4370432,"current-db-size-in-use":"4.4 MB"}
{"level":"info","ts":"2024-09-25T18:40:24.479634Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1946569372,"revision":1687,"compact-revision":-1}
==> gcp-auth [0fc58426b716] <==
2024/09/25 18:31:56 GCP Auth Webhook started!
2024/09/25 18:32:13 Ready to marshal response ...
2024/09/25 18:32:13 Ready to write response ...
2024/09/25 18:32:14 Ready to marshal response ...
2024/09/25 18:32:14 Ready to write response ...
2024/09/25 18:32:37 Ready to marshal response ...
2024/09/25 18:32:37 Ready to write response ...
2024/09/25 18:32:37 Ready to marshal response ...
2024/09/25 18:32:37 Ready to write response ...
2024/09/25 18:32:38 Ready to marshal response ...
2024/09/25 18:32:38 Ready to write response ...
2024/09/25 18:40:50 Ready to marshal response ...
2024/09/25 18:40:50 Ready to write response ...
==> kernel <==
18:41:51 up 24 min, 0 users, load average: 0.25, 0.31, 0.30
Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [053a2b8fb351] <==
W0925 18:31:16.026660 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.96.160.240:443: connect: connection refused
W0925 18:31:22.102335 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.228.183:443: connect: connection refused
E0925 18:31:22.102376 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.228.183:443: connect: connection refused" logger="UnhandledError"
W0925 18:31:44.113288 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.228.183:443: connect: connection refused
E0925 18:31:44.113321 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.228.183:443: connect: connection refused" logger="UnhandledError"
W0925 18:31:44.120958 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.228.183:443: connect: connection refused
E0925 18:31:44.120994 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.228.183:443: connect: connection refused" logger="UnhandledError"
I0925 18:32:13.915131 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0925 18:32:13.933383 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0925 18:32:27.312060 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0925 18:32:27.344103 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0925 18:32:27.423906 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0925 18:32:27.467826 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0925 18:32:27.467865 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0925 18:32:27.562345 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0925 18:32:27.634412 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0925 18:32:27.675171 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0925 18:32:27.711737 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0925 18:32:28.371671 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0925 18:32:28.562436 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0925 18:32:28.597713 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0925 18:32:28.664109 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0925 18:32:28.676462 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0925 18:32:28.711858 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0925 18:32:28.913641 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [bd8f95275d93] <==
W0925 18:40:36.388527 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:40:36.388571 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:40:40.392767 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:40:40.392810 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:40:41.105744 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:40:41.105782 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:40:47.737067 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:40:47.737117 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:40:48.472650 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:40:48.472806 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:41:18.992642 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:41:18.992693 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:41:20.676040 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:41:20.676080 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:41:23.100012 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:41:23.100055 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:41:32.681638 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:41:32.681679 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:41:33.792971 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:41:33.793018 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:41:37.075731 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:41:37.075775 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0925 18:41:40.009162 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0925 18:41:40.009210 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0925 18:41:50.557507 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="3.829µs"
==> kube-proxy [e6c312f7f1af] <==
I0925 18:30:34.146452 1 server_linux.go:66] "Using iptables proxy"
I0925 18:30:34.377301 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
E0925 18:30:34.377378 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0925 18:30:34.536342 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0925 18:30:34.536407 1 server_linux.go:169] "Using iptables Proxier"
I0925 18:30:34.548982 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0925 18:30:34.549346 1 server.go:483] "Version info" version="v1.31.1"
I0925 18:30:34.549372 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0925 18:30:34.580167 1 config.go:199] "Starting service config controller"
I0925 18:30:34.587528 1 shared_informer.go:313] Waiting for caches to sync for service config
I0925 18:30:34.585952 1 config.go:328] "Starting node config controller"
I0925 18:30:34.587566 1 shared_informer.go:313] Waiting for caches to sync for node config
I0925 18:30:34.585904 1 config.go:105] "Starting endpoint slice config controller"
I0925 18:30:34.587577 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0925 18:30:34.687983 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0925 18:30:34.688057 1 shared_informer.go:320] Caches are synced for service config
I0925 18:30:34.688377 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [4d1918b8b5d8] <==
W0925 18:30:25.296955 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0925 18:30:25.296976 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0925 18:30:25.296992 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0925 18:30:25.296998 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0925 18:30:25.296876 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0925 18:30:25.297033 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0925 18:30:25.297106 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0925 18:30:25.297135 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0925 18:30:26.210605 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0925 18:30:26.210652 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0925 18:30:26.271081 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0925 18:30:26.271118 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0925 18:30:26.271876 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0925 18:30:26.271911 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0925 18:30:26.288211 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0925 18:30:26.288254 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0925 18:30:26.291468 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0925 18:30:26.291498 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0925 18:30:26.303998 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0925 18:30:26.304036 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0925 18:30:26.508998 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0925 18:30:26.509049 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0925 18:30:26.687823 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0925 18:30:26.687868 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0925 18:30:28.595598 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Fri 2024-08-16 02:18:09 UTC, end at Wed 2024-09-25 18:41:51 UTC. --
Sep 25 18:41:38 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:38.875272 17877 scope.go:117] "RemoveContainer" containerID="f67c8f5ba20d6e3d3737a5f718559c84a0d3a952befbebadf13922595920fc5f"
Sep 25 18:41:38 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:38.875461 17877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-klq6x_gadget(24ca8ebe-3d3b-48a7-b44f-4765147ec0d3)\"" pod="gadget/gadget-klq6x" podUID="24ca8ebe-3d3b-48a7-b44f-4765147ec0d3"
Sep 25 18:41:38 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:38.877075 17877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1f264474-f1fa-4dec-9aa1-7fa98df2d232"
Sep 25 18:41:46 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:46.877163 17877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="38dabc49-8dc3-45c4-a315-1755541fa563"
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.511981 17877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/38dabc49-8dc3-45c4-a315-1755541fa563-gcp-creds\") pod \"38dabc49-8dc3-45c4-a315-1755541fa563\" (UID: \"38dabc49-8dc3-45c4-a315-1755541fa563\") "
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.512051 17877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwb58\" (UniqueName: \"kubernetes.io/projected/38dabc49-8dc3-45c4-a315-1755541fa563-kube-api-access-cwb58\") pod \"38dabc49-8dc3-45c4-a315-1755541fa563\" (UID: \"38dabc49-8dc3-45c4-a315-1755541fa563\") "
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.512141 17877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38dabc49-8dc3-45c4-a315-1755541fa563-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "38dabc49-8dc3-45c4-a315-1755541fa563" (UID: "38dabc49-8dc3-45c4-a315-1755541fa563"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.513897 17877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38dabc49-8dc3-45c4-a315-1755541fa563-kube-api-access-cwb58" (OuterVolumeSpecName: "kube-api-access-cwb58") pod "38dabc49-8dc3-45c4-a315-1755541fa563" (UID: "38dabc49-8dc3-45c4-a315-1755541fa563"). InnerVolumeSpecName "kube-api-access-cwb58". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.613164 17877 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cwb58\" (UniqueName: \"kubernetes.io/projected/38dabc49-8dc3-45c4-a315-1755541fa563-kube-api-access-cwb58\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.613191 17877 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/38dabc49-8dc3-45c4-a315-1755541fa563-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:50.877407 17877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1f264474-f1fa-4dec-9aa1-7fa98df2d232"
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.915761 17877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfvdr\" (UniqueName: \"kubernetes.io/projected/a9c82301-9560-4dd9-a31e-55bc04efd0e3-kube-api-access-jfvdr\") pod \"a9c82301-9560-4dd9-a31e-55bc04efd0e3\" (UID: \"a9c82301-9560-4dd9-a31e-55bc04efd0e3\") "
Sep 25 18:41:50 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:50.917577 17877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c82301-9560-4dd9-a31e-55bc04efd0e3-kube-api-access-jfvdr" (OuterVolumeSpecName: "kube-api-access-jfvdr") pod "a9c82301-9560-4dd9-a31e-55bc04efd0e3" (UID: "a9c82301-9560-4dd9-a31e-55bc04efd0e3"). InnerVolumeSpecName "kube-api-access-jfvdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.016702 17877 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mjql\" (UniqueName: \"kubernetes.io/projected/532ef9f6-818b-4628-a77d-5cb0d7ae89b4-kube-api-access-5mjql\") pod \"532ef9f6-818b-4628-a77d-5cb0d7ae89b4\" (UID: \"532ef9f6-818b-4628-a77d-5cb0d7ae89b4\") "
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.016848 17877 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jfvdr\" (UniqueName: \"kubernetes.io/projected/a9c82301-9560-4dd9-a31e-55bc04efd0e3-kube-api-access-jfvdr\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.018835 17877 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532ef9f6-818b-4628-a77d-5cb0d7ae89b4-kube-api-access-5mjql" (OuterVolumeSpecName: "kube-api-access-5mjql") pod "532ef9f6-818b-4628-a77d-5cb0d7ae89b4" (UID: "532ef9f6-818b-4628-a77d-5cb0d7ae89b4"). InnerVolumeSpecName "kube-api-access-5mjql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.117628 17877 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5mjql\" (UniqueName: \"kubernetes.io/projected/532ef9f6-818b-4628-a77d-5cb0d7ae89b4-kube-api-access-5mjql\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.139946 17877 scope.go:117] "RemoveContainer" containerID="eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.185422 17877 scope.go:117] "RemoveContainer" containerID="eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:51.186362 17877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395" containerID="eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.186411 17877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"} err="failed to get container status \"eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395\": rpc error: code = Unknown desc = Error response from daemon: No such container: eeffae39df34952f70da04ea008e7cc0cc5d57bdf672588d26b7d6a222dac395"
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.186445 17877 scope.go:117] "RemoveContainer" containerID="b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.205006 17877 scope.go:117] "RemoveContainer" containerID="b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: E0925 18:41:51.205896 17877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06" containerID="b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"
Sep 25 18:41:51 ubuntu-20-agent-2 kubelet[17877]: I0925 18:41:51.205932 17877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"} err="failed to get container status \"b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06\": rpc error: code = Unknown desc = Error response from daemon: No such container: b1e7a6c95c0b30ae89fc214dc87f24d489fbff80760ce5f6823ebea1a28e0b06"
==> storage-provisioner [52f5a7a4d3f3] <==
I0925 18:30:35.407151 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0925 18:30:35.420133 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0925 18:30:35.420176 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0925 18:30:35.434319 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0925 18:30:35.434511 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e2a10086-b7eb-45bf-bc61-508339615f6d!
I0925 18:30:35.435433 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"035cb8a6-9c68-4115-8f77-55377d6f71f5", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e2a10086-b7eb-45bf-bc61-508339615f6d became leader
I0925 18:30:35.534888 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e2a10086-b7eb-45bf-bc61-508339615f6d!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent-2/10.138.0.48
Start Time:       Wed, 25 Sep 2024 18:32:37 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.25
IPs:
  IP:  10.244.0.25
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qntx9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-qntx9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
  Normal   Pulling    7m42s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m41s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m41s (x4 over 9m13s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m31s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m8s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.78s)