=== RUN TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.999708ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-49nkl" [f279ea6c-0d65-4d94-9dc1-43ba6d130381] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002989932s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4lsgw" [3bd51464-305d-4990-aed6-cb08ea16c1b9] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003305003s
addons_test.go:338: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.083264028s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/30 10:32:46 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:43761 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:21 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 30 Sep 24 10:21 UTC | 30 Sep 24 10:21 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 30 Sep 24 10:21 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 30 Sep 24 10:21 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 30 Sep 24 10:21 UTC | 30 Sep 24 10:22 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 30 Sep 24 10:23 UTC | 30 Sep 24 10:23 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/30 10:21:13
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0930 10:21:13.352720 14152 out.go:345] Setting OutFile to fd 1 ...
I0930 10:21:13.352887 14152 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:21:13.352898 14152 out.go:358] Setting ErrFile to fd 2...
I0930 10:21:13.352906 14152 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:21:13.353082 14152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3681/.minikube/bin
I0930 10:21:13.353641 14152 out.go:352] Setting JSON to false
I0930 10:21:13.354552 14152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":221,"bootTime":1727691452,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0930 10:21:13.354644 14152 start.go:139] virtualization: kvm guest
I0930 10:21:13.356931 14152 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0930 10:21:13.358246 14152 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-3681/.minikube/cache/preloaded-tarball: no such file or directory
I0930 10:21:13.358277 14152 out.go:177] - MINIKUBE_LOCATION=19734
I0930 10:21:13.358283 14152 notify.go:220] Checking for updates...
I0930 10:21:13.359615 14152 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0930 10:21:13.360997 14152 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
I0930 10:21:13.362431 14152 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
I0930 10:21:13.363779 14152 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0930 10:21:13.365145 14152 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0930 10:21:13.366672 14152 driver.go:394] Setting default libvirt URI to qemu:///system
I0930 10:21:13.376268 14152 out.go:177] * Using the none driver based on user configuration
I0930 10:21:13.377509 14152 start.go:297] selected driver: none
I0930 10:21:13.377525 14152 start.go:901] validating driver "none" against <nil>
I0930 10:21:13.377539 14152 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0930 10:21:13.377573 14152 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0930 10:21:13.378007 14152 out.go:270] ! The 'none' driver does not respect the --memory flag
I0930 10:21:13.378890 14152 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0930 10:21:13.379263 14152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0930 10:21:13.379318 14152 cni.go:84] Creating CNI manager for ""
I0930 10:21:13.379382 14152 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0930 10:21:13.379396 14152 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0930 10:21:13.379473 14152 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0930 10:21:13.381537 14152 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0930 10:21:13.383470 14152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/config.json ...
I0930 10:21:13.383504 14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/config.json: {Name:mk1b7757fcffe1c2ef054e98e7fbd4d6b65c08e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:13.383654 14152 start.go:360] acquireMachinesLock for minikube: {Name:mk950621b2cf18d4d46c3c8617fe9495b86929a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0930 10:21:13.383695 14152 start.go:364] duration metric: took 24.204µs to acquireMachinesLock for "minikube"
I0930 10:21:13.383714 14152 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0930 10:21:13.383816 14152 start.go:125] createHost starting for "" (driver="none")
I0930 10:21:13.385389 14152 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0930 10:21:13.386684 14152 exec_runner.go:51] Run: systemctl --version
I0930 10:21:13.389306 14152 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0930 10:21:13.389348 14152 client.go:168] LocalClient.Create starting
I0930 10:21:13.389409 14152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3681/.minikube/certs/ca.pem
I0930 10:21:13.389441 14152 main.go:141] libmachine: Decoding PEM data...
I0930 10:21:13.389460 14152 main.go:141] libmachine: Parsing certificate...
I0930 10:21:13.389509 14152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3681/.minikube/certs/cert.pem
I0930 10:21:13.389536 14152 main.go:141] libmachine: Decoding PEM data...
I0930 10:21:13.389552 14152 main.go:141] libmachine: Parsing certificate...
I0930 10:21:13.389994 14152 client.go:171] duration metric: took 636.505µs to LocalClient.Create
I0930 10:21:13.390025 14152 start.go:167] duration metric: took 722.263µs to libmachine.API.Create "minikube"
I0930 10:21:13.390034 14152 start.go:293] postStartSetup for "minikube" (driver="none")
I0930 10:21:13.390084 14152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0930 10:21:13.390133 14152 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0930 10:21:13.398084 14152 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0930 10:21:13.398111 14152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0930 10:21:13.398124 14152 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0930 10:21:13.400360 14152 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0930 10:21:13.401616 14152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3681/.minikube/addons for local assets ...
I0930 10:21:13.401669 14152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3681/.minikube/files for local assets ...
I0930 10:21:13.401700 14152 start.go:296] duration metric: took 11.656311ms for postStartSetup
I0930 10:21:13.402472 14152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/config.json ...
I0930 10:21:13.402635 14152 start.go:128] duration metric: took 18.808783ms to createHost
I0930 10:21:13.402651 14152 start.go:83] releasing machines lock for "minikube", held for 18.942587ms
I0930 10:21:13.403127 14152 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0930 10:21:13.403182 14152 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0930 10:21:13.406211 14152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0930 10:21:13.406260 14152 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0930 10:21:13.416344 14152 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0930 10:21:13.416369 14152 start.go:495] detecting cgroup driver to use...
I0930 10:21:13.416399 14152 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0930 10:21:13.416527 14152 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0930 10:21:13.434414 14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0930 10:21:13.443091 14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0930 10:21:13.451716 14152 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0930 10:21:13.451761 14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0930 10:21:13.461639 14152 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0930 10:21:13.471151 14152 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0930 10:21:13.479700 14152 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0930 10:21:13.491098 14152 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0930 10:21:13.499759 14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0930 10:21:13.509132 14152 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0930 10:21:13.517257 14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0930 10:21:13.526366 14152 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0930 10:21:13.533302 14152 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0930 10:21:13.540218 14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0930 10:21:13.753304 14152 exec_runner.go:51] Run: sudo systemctl restart containerd
I0930 10:21:13.819458 14152 start.go:495] detecting cgroup driver to use...
I0930 10:21:13.819510 14152 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0930 10:21:13.819658 14152 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0930 10:21:13.839719 14152 exec_runner.go:51] Run: which cri-dockerd
I0930 10:21:13.840599 14152 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0930 10:21:13.848177 14152 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0930 10:21:13.848195 14152 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0930 10:21:13.848223 14152 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0930 10:21:13.855092 14152 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0930 10:21:13.855236 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube989885707 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0930 10:21:13.862442 14152 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0930 10:21:14.099473 14152 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0930 10:21:14.322661 14152 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0930 10:21:14.322789 14152 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0930 10:21:14.322804 14152 exec_runner.go:203] rm: /etc/docker/daemon.json
I0930 10:21:14.322862 14152 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0930 10:21:14.330805 14152 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0930 10:21:14.330950 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube628740387 /etc/docker/daemon.json
I0930 10:21:14.338617 14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0930 10:21:14.593243 14152 exec_runner.go:51] Run: sudo systemctl restart docker
I0930 10:21:14.885437 14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0930 10:21:14.896496 14152 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0930 10:21:14.911878 14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0930 10:21:14.922592 14152 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0930 10:21:15.148348 14152 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0930 10:21:15.408148 14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0930 10:21:15.646960 14152 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0930 10:21:15.661154 14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0930 10:21:15.671865 14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0930 10:21:15.915010 14152 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0930 10:21:15.982544 14152 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0930 10:21:15.982626 14152 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0930 10:21:15.984123 14152 start.go:563] Will wait 60s for crictl version
I0930 10:21:15.984176 14152 exec_runner.go:51] Run: which crictl
I0930 10:21:15.985190 14152 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0930 10:21:16.013562 14152 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0930 10:21:16.013621 14152 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0930 10:21:16.032568 14152 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0930 10:21:16.056632 14152 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0930 10:21:16.056707 14152 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0930 10:21:16.059279 14152 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0930 10:21:16.060514 14152 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0930 10:21:16.060638 14152 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0930 10:21:16.060651 14152 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
I0930 10:21:16.060738 14152 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0930 10:21:16.060795 14152 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0930 10:21:16.107177 14152 cni.go:84] Creating CNI manager for ""
I0930 10:21:16.107199 14152 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0930 10:21:16.107208 14152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0930 10:21:16.107226 14152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0930 10:21:16.107368 14152 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.138.0.48
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-2"
kubeletExtraArgs:
node-ip: 10.138.0.48
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0930 10:21:16.107425 14152 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0930 10:21:16.115833 14152 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0930 10:21:16.115882 14152 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0930 10:21:16.123361 14152 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0930 10:21:16.123362 14152 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0930 10:21:16.123361 14152 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0930 10:21:16.123416 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0930 10:21:16.123415 14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0930 10:21:16.123474 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0930 10:21:16.133968 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
I0930 10:21:16.174854 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube946911628 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0930 10:21:16.185818 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2871785167 /var/lib/minikube/binaries/v1.31.1/kubectl
I0930 10:21:16.195965 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1757915952 /var/lib/minikube/binaries/v1.31.1/kubelet
I0930 10:21:16.260992 14152 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0930 10:21:16.269241 14152 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0930 10:21:16.269260 14152 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0930 10:21:16.269295 14152 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0930 10:21:16.276954 14152 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
I0930 10:21:16.277084 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3370089736 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0930 10:21:16.284735 14152 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0930 10:21:16.284752 14152 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0930 10:21:16.284793 14152 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0930 10:21:16.292650 14152 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0930 10:21:16.292782 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube932379393 /lib/systemd/system/kubelet.service
I0930 10:21:16.300150 14152 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
I0930 10:21:16.300255 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1779135712 /var/tmp/minikube/kubeadm.yaml.new
I0930 10:21:16.307512 14152 exec_runner.go:51] Run: grep 10.138.0.48 control-plane.minikube.internal$ /etc/hosts
I0930 10:21:16.308664 14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0930 10:21:16.529511 14152 exec_runner.go:51] Run: sudo systemctl start kubelet
I0930 10:21:16.543855 14152 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube for IP: 10.138.0.48
I0930 10:21:16.543880 14152 certs.go:194] generating shared ca certs ...
I0930 10:21:16.543896 14152 certs.go:226] acquiring lock for ca certs: {Name:mk0a5b9b1d30d3d8af9c11762592cf8e7817e041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:16.544032 14152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3681/.minikube/ca.key
I0930 10:21:16.544097 14152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3681/.minikube/proxy-client-ca.key
I0930 10:21:16.544110 14152 certs.go:256] generating profile certs ...
I0930 10:21:16.544172 14152 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.key
I0930 10:21:16.544199 14152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.crt with IP's: []
I0930 10:21:16.617884 14152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.crt ...
I0930 10:21:16.617910 14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.crt: {Name:mk0a31888a10e1b9b9d480a4e5d1e7e81c2faefa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:16.618049 14152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.key ...
I0930 10:21:16.618063 14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.key: {Name:mk97492241b85cb3608b33f4ce925f417ccad8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:16.618151 14152 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key.35c0634a
I0930 10:21:16.618165 14152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
I0930 10:21:16.858398 14152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
I0930 10:21:16.858426 14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk0da2466d77ccdd4ee35c6d92f955e6dd15b091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:16.858562 14152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key.35c0634a ...
I0930 10:21:16.858574 14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf4de6a10a92347f23430cdad86300f89676d1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:16.858642 14152 certs.go:381] copying /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt
I0930 10:21:16.858738 14152 certs.go:385] copying /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key
I0930 10:21:16.858807 14152 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.key
I0930 10:21:16.858827 14152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0930 10:21:17.086340 14152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.crt ...
I0930 10:21:17.086369 14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.crt: {Name:mk63d5b0cba483badba29f192c4be82a61b1805f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:17.086502 14152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.key ...
I0930 10:21:17.086516 14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.key: {Name:mk937e2ef95b93b80300b9c639fcd810c9496f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:17.086707 14152 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3681/.minikube/certs/ca-key.pem (1675 bytes)
I0930 10:21:17.086748 14152 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3681/.minikube/certs/ca.pem (1082 bytes)
I0930 10:21:17.086783 14152 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3681/.minikube/certs/cert.pem (1123 bytes)
I0930 10:21:17.086863 14152 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3681/.minikube/certs/key.pem (1675 bytes)
I0930 10:21:17.087634 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0930 10:21:17.087770 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3483544710 /var/lib/minikube/certs/ca.crt
I0930 10:21:17.097628 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0930 10:21:17.097729 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3692581465 /var/lib/minikube/certs/ca.key
I0930 10:21:17.105978 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0930 10:21:17.106078 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2196338853 /var/lib/minikube/certs/proxy-client-ca.crt
I0930 10:21:17.114093 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0930 10:21:17.114214 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3302222190 /var/lib/minikube/certs/proxy-client-ca.key
I0930 10:21:17.121862 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0930 10:21:17.121987 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2346850202 /var/lib/minikube/certs/apiserver.crt
I0930 10:21:17.130369 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0930 10:21:17.130511 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3331841777 /var/lib/minikube/certs/apiserver.key
I0930 10:21:17.138654 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0930 10:21:17.138791 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2219864244 /var/lib/minikube/certs/proxy-client.crt
I0930 10:21:17.146608 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0930 10:21:17.146747 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393364853 /var/lib/minikube/certs/proxy-client.key
I0930 10:21:17.154543 14152 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0930 10:21:17.154571 14152 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:17.154614 14152 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:17.161954 14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0930 10:21:17.162100 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1696048025 /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:17.170641 14152 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0930 10:21:17.170760 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2485428456 /var/lib/minikube/kubeconfig
I0930 10:21:17.178418 14152 exec_runner.go:51] Run: openssl version
I0930 10:21:17.181107 14152 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0930 10:21:17.189685 14152 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:17.190975 14152 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:17.191017 14152 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0930 10:21:17.193720 14152 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0930 10:21:17.201377 14152 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0930 10:21:17.202374 14152 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0930 10:21:17.202411   14152 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0930 10:21:17.202514 14152 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0930 10:21:17.217374 14152 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0930 10:21:17.225664 14152 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0930 10:21:17.233622 14152 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0930 10:21:17.254075 14152 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0930 10:21:17.262418 14152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0930 10:21:17.262442 14152 kubeadm.go:157] found existing configuration files:
I0930 10:21:17.262491 14152 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0930 10:21:17.270222 14152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0930 10:21:17.270267 14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0930 10:21:17.277417 14152 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0930 10:21:17.287232 14152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0930 10:21:17.287299 14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0930 10:21:17.294522 14152 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0930 10:21:17.302445 14152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0930 10:21:17.302497 14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0930 10:21:17.309773 14152 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0930 10:21:17.317360 14152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0930 10:21:17.317415 14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0930 10:21:17.325814 14152 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0930 10:21:17.359027 14152 kubeadm.go:310] W0930 10:21:17.358912 15471 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0930 10:21:17.359583 14152 kubeadm.go:310] W0930 10:21:17.359477 15471 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0930 10:21:17.361136 14152 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0930 10:21:17.361157 14152 kubeadm.go:310] [preflight] Running pre-flight checks
I0930 10:21:17.462693 14152 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0930 10:21:17.462804 14152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0930 10:21:17.462814 14152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0930 10:21:17.462821 14152 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0930 10:21:17.473661 14152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0930 10:21:17.476450 14152 out.go:235] - Generating certificates and keys ...
I0930 10:21:17.476501 14152 kubeadm.go:310] [certs] Using existing ca certificate authority
I0930 10:21:17.476515 14152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0930 10:21:17.763541 14152 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0930 10:21:17.876946 14152 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0930 10:21:17.961339 14152 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0930 10:21:18.036728 14152 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0930 10:21:18.260305 14152 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0930 10:21:18.260442 14152 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0930 10:21:18.395669 14152 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0930 10:21:18.395800 14152 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
I0930 10:21:18.522939 14152 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0930 10:21:18.640886 14152 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0930 10:21:18.747681 14152 kubeadm.go:310] [certs] Generating "sa" key and public key
I0930 10:21:18.747815 14152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0930 10:21:18.857464 14152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0930 10:21:18.987130 14152 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0930 10:21:19.299937 14152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0930 10:21:19.654004 14152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0930 10:21:19.894999 14152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0930 10:21:19.895541 14152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0930 10:21:19.897739 14152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0930 10:21:19.899745 14152 out.go:235] - Booting up control plane ...
I0930 10:21:19.899765 14152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0930 10:21:19.899778 14152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0930 10:21:19.900209 14152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0930 10:21:19.920460 14152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0930 10:21:19.924585 14152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0930 10:21:19.924608 14152 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0930 10:21:20.158781 14152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0930 10:21:20.158802 14152 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0930 10:21:21.160320 14152 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001526429s
I0930 10:21:21.160345 14152 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0930 10:21:25.162141 14152 kubeadm.go:310] [api-check] The API server is healthy after 4.001788604s
I0930 10:21:25.173260 14152 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0930 10:21:25.182442 14152 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0930 10:21:25.198799 14152 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0930 10:21:25.198821 14152 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0930 10:21:25.205224 14152 kubeadm.go:310] [bootstrap-token] Using token: 8mbsnh.bfaqwwlbiiiw0kp5
I0930 10:21:25.206512 14152 out.go:235] - Configuring RBAC rules ...
I0930 10:21:25.206542 14152 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0930 10:21:25.209309 14152 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0930 10:21:25.214862 14152 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0930 10:21:25.217020 14152 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0930 10:21:25.219223 14152 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0930 10:21:25.221368 14152 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0930 10:21:25.569040 14152 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0930 10:21:25.993580 14152 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0930 10:21:26.568312 14152 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0930 10:21:26.570053 14152 kubeadm.go:310]
I0930 10:21:26.570071 14152 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0930 10:21:26.570076 14152 kubeadm.go:310]
I0930 10:21:26.570080 14152 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0930 10:21:26.570084 14152 kubeadm.go:310]
I0930 10:21:26.570088 14152 kubeadm.go:310] mkdir -p $HOME/.kube
I0930 10:21:26.570098 14152 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0930 10:21:26.570103 14152 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0930 10:21:26.570106 14152 kubeadm.go:310]
I0930 10:21:26.570110 14152 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0930 10:21:26.570114 14152 kubeadm.go:310]
I0930 10:21:26.570119 14152 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0930 10:21:26.570122 14152 kubeadm.go:310]
I0930 10:21:26.570126 14152 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0930 10:21:26.570130 14152 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0930 10:21:26.570134 14152 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0930 10:21:26.570138 14152 kubeadm.go:310]
I0930 10:21:26.570143 14152 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0930 10:21:26.570147 14152 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0930 10:21:26.570151 14152 kubeadm.go:310]
I0930 10:21:26.570155 14152 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8mbsnh.bfaqwwlbiiiw0kp5 \
I0930 10:21:26.570158 14152 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:63de383190f50d46ec6dfa9942e832c15098866b42f0ccbc88cee83ba5922779 \
I0930 10:21:26.570161 14152 kubeadm.go:310] --control-plane
I0930 10:21:26.570164 14152 kubeadm.go:310]
I0930 10:21:26.570167 14152 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0930 10:21:26.570176 14152 kubeadm.go:310]
I0930 10:21:26.570178 14152 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8mbsnh.bfaqwwlbiiiw0kp5 \
I0930 10:21:26.570181 14152 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:63de383190f50d46ec6dfa9942e832c15098866b42f0ccbc88cee83ba5922779
I0930 10:21:26.572834 14152 cni.go:84] Creating CNI manager for ""
I0930 10:21:26.572859 14152 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0930 10:21:26.574452 14152 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0930 10:21:26.575769 14152 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0930 10:21:26.585914 14152 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0930 10:21:26.586066 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube806266157 /etc/cni/net.d/1-k8s.conflist
I0930 10:21:26.595103 14152 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0930 10:21:26.595166 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:26.595189 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0930 10:21:26.604853 14152 ops.go:34] apiserver oom_adj: -16
I0930 10:21:26.661637 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:27.162723 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:27.662443 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:28.161865 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:28.662469 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:29.162637 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:29.662427 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:30.162116 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:30.662373 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:31.162398 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:31.662337 14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0930 10:21:31.724593 14152 kubeadm.go:1113] duration metric: took 5.129472581s to wait for elevateKubeSystemPrivileges
I0930 10:21:31.724627 14152 kubeadm.go:394] duration metric: took 14.522219874s to StartCluster
I0930 10:21:31.724658 14152 settings.go:142] acquiring lock: {Name:mkba5c1698050cdfa071486ada1fbbed08e1f420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:31.724730 14152 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19734-3681/kubeconfig
I0930 10:21:31.725503 14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/kubeconfig: {Name:mka1d3ed23933c1059435012f9bcdee38f5f1e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0930 10:21:31.725755 14152 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0930 10:21:31.725838 14152 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0930 10:21:31.725954 14152 addons.go:69] Setting yakd=true in profile "minikube"
I0930 10:21:31.725954 14152 addons.go:69] Setting metrics-server=true in profile "minikube"
I0930 10:21:31.725971 14152 addons.go:234] Setting addon yakd=true in "minikube"
I0930 10:21:31.725982 14152 addons.go:234] Setting addon metrics-server=true in "minikube"
I0930 10:21:31.726001 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.726018 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.726057 14152 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:21:31.726095 14152 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0930 10:21:31.726107 14152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0930 10:21:31.726526 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.726530 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.726538 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.726546 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.726566 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.726579 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.726798 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.726812 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.726840 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.727088 14152 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0930 10:21:31.727110 14152 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0930 10:21:31.727184 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.727389 14152 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0930 10:21:31.727419 14152 mustload.go:65] Loading cluster: minikube
I0930 10:21:31.727657 14152 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:21:31.727803 14152 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0930 10:21:31.727820 14152 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0930 10:21:31.728285 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.728290 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.728297 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.728302 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.728323 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.728337 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.728455 14152 addons.go:69] Setting volcano=true in profile "minikube"
I0930 10:21:31.728459 14152 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0930 10:21:31.728469 14152 addons.go:234] Setting addon volcano=true in "minikube"
I0930 10:21:31.728475 14152 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0930 10:21:31.728493 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.728499 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.729107 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.729120 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.729149 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.729172 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.729186 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.729218 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.729826 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.729842 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.729872 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.730353 14152 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0930 10:21:31.730520 14152 out.go:177] * Configuring local host environment ...
I0930 10:21:31.730533 14152 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0930 10:21:31.730578 14152 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0930 10:21:31.730612 14152 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0930 10:21:31.730637 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.730708 14152 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0930 10:21:31.730813 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.731005 14152 addons.go:69] Setting registry=true in profile "minikube"
I0930 10:21:31.731029 14152 addons.go:234] Setting addon registry=true in "minikube"
I0930 10:21:31.731057 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.731098 14152 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0930 10:21:31.731121 14152 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0930 10:21:31.731148 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.731300 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.731319 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.731348 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.731694 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.731713 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.731731 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.731740 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.731745 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.731775 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.730545 14152 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0930 10:21:31.731850 14152 host.go:66] Checking if "minikube" exists ...
W0930 10:21:31.732026 14152 out.go:270] *
W0930 10:21:31.732043 14152 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
I0930 10:21:31.732048 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.732061 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.732092 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0930 10:21:31.732051 14152 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0930 10:21:31.737306 14152 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0930 10:21:31.737338 14152 out.go:270] *
W0930 10:21:31.737389 14152 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0930 10:21:31.737405 14152 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0930 10:21:31.737420 14152 out.go:270] *
W0930 10:21:31.737445 14152 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0930 10:21:31.737699 14152 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0930 10:21:31.737728 14152 out.go:270] *
W0930 10:21:31.737754 14152 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0930 10:21:31.737800 14152 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0930 10:21:31.741393 14152 out.go:177] * Verifying Kubernetes components...
I0930 10:21:31.742687 14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0930 10:21:31.747444 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.747444 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.752130 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.752156 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.752189 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.752502 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.754451 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.757873 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.759687 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.759887 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.764456 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.768966 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.769018 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.769238 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.769285 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.771960 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.772021 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.776563 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.776605 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.776808 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.776861 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.778000 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.778961 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.778996 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.781185 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.781230 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.783576 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.785107 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.785125 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.787568 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.787774 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.787792 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.796468 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.796481 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.796488 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.796498 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.796521 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.796538 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.796476 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.796772 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.797605 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.797646 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.798687 14152 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0930 10:21:31.801973 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.802016 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.802647 14152 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0930 10:21:31.802710 14152 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0930 10:21:31.802911 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1355357603 /etc/kubernetes/addons/yakd-ns.yaml
I0930 10:21:31.803409 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.803503 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.804801 14152 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0930 10:21:31.804848 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.805077 14152 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0930 10:21:31.805489 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.805497 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.805508 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.805537 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.806328 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.806346 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.806378 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.806449 14152 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0930 10:21:31.806471 14152 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0930 10:21:31.806602 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3013348751 /etc/kubernetes/addons/ig-namespace.yaml
I0930 10:21:31.806823 14152 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0930 10:21:31.806857 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.807607 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:31.807626 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:31.807662 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:31.807826 14152 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0930 10:21:31.809266 14152 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0930 10:21:31.809302 14152 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0930 10:21:31.809424 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4238186474 /etc/kubernetes/addons/metrics-apiservice.yaml
I0930 10:21:31.814737 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.814760 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.814878 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.814899 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.814922 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.816374 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.816426 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.824974 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.824998 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.826063 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.826469 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.827464 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.827714 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.827730 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.827988 14152 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
I0930 10:21:31.828032 14152 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0930 10:21:31.829307 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.829326 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.829736 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.829757 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:31.833125 14152 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
I0930 10:21:31.833251 14152 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0930 10:21:31.833344 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.833429 14152 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0930 10:21:31.833546 14152 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0930 10:21:31.833932 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.834156 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.834246 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.834262 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2296623654 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0930 10:21:31.836538 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.836972 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.837441 14152 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0930 10:21:31.837471 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0930 10:21:31.837591 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1892949902 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0930 10:21:31.838979 14152 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
I0930 10:21:31.839012 14152 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0930 10:21:31.840073 14152 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
I0930 10:21:31.840155 14152 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0930 10:21:31.842453 14152 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0930 10:21:31.842592 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0930 10:21:31.843418 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2409048212 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0930 10:21:31.843131 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.845268 14152 out.go:177] - Using image docker.io/registry:2.8.3
I0930 10:21:31.847170 14152 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0930 10:21:31.847190 14152 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0930 10:21:31.848202 14152 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0930 10:21:31.848228 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
I0930 10:21:31.848243 14152 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0930 10:21:31.848267 14152 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0930 10:21:31.848379 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3311592831 /etc/kubernetes/addons/yakd-sa.yaml
I0930 10:21:31.848702 14152 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0930 10:21:31.848723 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0930 10:21:31.848834 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2188874422 /etc/kubernetes/addons/registry-rc.yaml
I0930 10:21:31.849186 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3013926792 /etc/kubernetes/addons/volcano-deployment.yaml
I0930 10:21:31.853855 14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0930 10:21:31.853944 14152 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0930 10:21:31.854143 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1635928611 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0930 10:21:31.854262 14152 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0930 10:21:31.855956 14152 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0930 10:21:31.864385 14152 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0930 10:21:31.865054 14152 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0930 10:21:31.865086 14152 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0930 10:21:31.865211 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1550161587 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0930 10:21:31.868029 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.868080 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.883492 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:31.883862 14152 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0930 10:21:31.883929 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.883972 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.884280 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0930 10:21:31.885289 14152 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0930 10:21:31.887211 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.887238 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.887691 14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0930 10:21:31.887789 14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0930 10:21:31.887918 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube608355028 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0930 10:21:31.894421 14152 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0930 10:21:31.894451 14152 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0930 10:21:31.894563 14152 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0930 10:21:31.894564 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.894584 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.894589 14152 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0930 10:21:31.894704 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3571105200 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0930 10:21:31.898735 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.899990 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.901138 14152 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0930 10:21:31.901160 14152 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0930 10:21:31.901257 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3982265590 /etc/kubernetes/addons/metrics-server-service.yaml
I0930 10:21:31.901649 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.901669 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.902128 14152 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0930 10:21:31.903263 14152 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0930 10:21:31.904147 14152 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0930 10:21:31.904277 14152 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0930 10:21:31.904287 14152 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0930 10:21:31.904716 14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0930 10:21:31.906514 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3010322314 /etc/kubernetes/addons/ig-role.yaml
I0930 10:21:31.906965 14152 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0930 10:21:31.917417 14152 out.go:177] - Using image docker.io/busybox:stable
I0930 10:21:31.917568 14152 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0930 10:21:31.918020 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:31.918085 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:31.918149 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.918159 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0930 10:21:31.918410 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1820589435 /etc/kubernetes/addons/registry-svc.yaml
I0930 10:21:31.918722 14152 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0930 10:21:31.918744 14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0930 10:21:31.918859 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4199368668 /etc/kubernetes/addons/rbac-hostpath.yaml
I0930 10:21:31.919086 14152 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0930 10:21:31.919105 14152 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0930 10:21:31.919218 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube182016945 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0930 10:21:31.927542 14152 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0930 10:21:31.927567 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0930 10:21:31.927678 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1597802008 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0930 10:21:31.927923 14152 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0930 10:21:31.928532 14152 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0930 10:21:31.928570 14152 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0930 10:21:31.928691 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1486687048 /etc/kubernetes/addons/yakd-crb.yaml
I0930 10:21:31.930627 14152 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0930 10:21:31.930652 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0930 10:21:31.930779 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:31.930798 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:31.931927 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube424137532 /etc/kubernetes/addons/deployment.yaml
I0930 10:21:31.934621 14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0930 10:21:31.934639 14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0930 10:21:31.934724 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3439996899 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0930 10:21:31.934763 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0930 10:21:31.934855 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:31.934895 14152 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0930 10:21:31.934908 14152 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0930 10:21:31.934914 14152 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0930 10:21:31.934950 14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0930 10:21:31.942294 14152 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0930 10:21:31.942319 14152 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0930 10:21:31.942435 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1497836714 /etc/kubernetes/addons/ig-rolebinding.yaml
I0930 10:21:31.948379 14152 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0930 10:21:31.948408 14152 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0930 10:21:31.948555 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3817326998 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0930 10:21:31.956403 14152 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0930 10:21:31.956577 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1568443454 /etc/kubernetes/addons/storageclass.yaml
I0930 10:21:31.963670 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0930 10:21:31.964288 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0930 10:21:31.967375 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0930 10:21:31.968471 14152 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0930 10:21:31.968491 14152 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0930 10:21:31.968608 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918197529 /etc/kubernetes/addons/yakd-svc.yaml
I0930 10:21:31.968640 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1444247075 /etc/kubernetes/addons/storage-provisioner.yaml
I0930 10:21:31.970965 14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0930 10:21:31.970995 14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0930 10:21:31.971123 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2520277398 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0930 10:21:31.974343 14152 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0930 10:21:31.974375 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0930 10:21:31.974497 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3697893580 /etc/kubernetes/addons/registry-proxy.yaml
I0930 10:21:31.996891 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0930 10:21:32.000241 14152 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0930 10:21:32.000278 14152 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0930 10:21:32.000411 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2817648918 /etc/kubernetes/addons/ig-clusterrole.yaml
I0930 10:21:32.002142 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0930 10:21:32.003458 14152 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0930 10:21:32.003483 14152 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0930 10:21:32.003601 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube285494986 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0930 10:21:32.009297 14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0930 10:21:32.009322 14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0930 10:21:32.009437 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3030886932 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0930 10:21:32.012899 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0930 10:21:32.018575 14152 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0930 10:21:32.018728 14152 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0930 10:21:32.018884 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2090965801 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0930 10:21:32.027653 14152 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0930 10:21:32.027680 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0930 10:21:32.027809 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2650349168 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0930 10:21:32.033545 14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0930 10:21:32.033573 14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0930 10:21:32.033779 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2521279534 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0930 10:21:32.040994 14152 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0930 10:21:32.041022 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0930 10:21:32.041148 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube261155437 /etc/kubernetes/addons/yakd-dp.yaml
I0930 10:21:32.079130 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0930 10:21:32.114801 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0930 10:21:32.130444 14152 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
I0930 10:21:32.130499 14152 exec_runner.go:151] cp: inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
I0930 10:21:32.130658 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube216802978 /etc/kubernetes/addons/ig-configmap.yaml
I0930 10:21:32.166060 14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0930 10:21:32.166102 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0930 10:21:32.166249 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1364075100 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0930 10:21:32.173314 14152 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0930 10:21:32.173348 14152 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0930 10:21:32.173459 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4035067505 /etc/kubernetes/addons/ig-crd.yaml
I0930 10:21:32.179427 14152 exec_runner.go:51] Run: sudo systemctl start kubelet
I0930 10:21:32.235605 14152 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0930 10:21:32.235649 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
I0930 10:21:32.235806 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1048091852 /etc/kubernetes/addons/ig-daemonset.yaml
I0930 10:21:32.236029 14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0930 10:21:32.236061 14152 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0930 10:21:32.236172 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3281754744 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0930 10:21:32.276753 14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0930 10:21:32.276786 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0930 10:21:32.276932 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2819992982 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0930 10:21:32.296817 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0930 10:21:32.347207 14152 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
I0930 10:21:32.353123 14152 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
I0930 10:21:32.353146 14152 node_ready.go:38] duration metric: took 5.901148ms for node "ubuntu-20-agent-2" to be "Ready" ...
I0930 10:21:32.353158 14152 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0930 10:21:32.363747 14152 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:32.462401 14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0930 10:21:32.462445 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0930 10:21:32.463446 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube28210301 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0930 10:21:32.536136 14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0930 10:21:32.536174 14152 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0930 10:21:32.536310 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube344804646 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0930 10:21:32.568684 14152 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0930 10:21:32.629189 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0930 10:21:32.898478 14152 addons.go:475] Verifying addon registry=true in "minikube"
I0930 10:21:32.905179 14152 out.go:177] * Verifying registry addon...
I0930 10:21:32.910810 14152 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0930 10:21:32.922720 14152 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0930 10:21:32.922766 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:32.979526 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.015196962s)
I0930 10:21:33.074951 14152 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0930 10:21:33.135180 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.133007667s)
I0930 10:21:33.178877 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.215170848s)
I0930 10:21:33.407442 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.292587619s)
I0930 10:21:33.411695 14152 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0930 10:21:33.415123 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:33.440046 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.505240158s)
I0930 10:21:33.440078 14152 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0930 10:21:33.568298 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.271415157s)
I0930 10:21:33.875711 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.796525324s)
W0930 10:21:33.875814 14152 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0930 10:21:33.875851 14152 retry.go:31] will retry after 343.894474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0930 10:21:33.914583 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:34.220092 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0930 10:21:34.371081 14152 pod_ready.go:103] pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace has status "Ready":"False"
I0930 10:21:34.416874 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:34.916026 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:35.073669 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.155474067s)
I0930 10:21:35.378160 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.748889353s)
I0930 10:21:35.378194 14152 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0930 10:21:35.380330 14152 out.go:177] * Verifying csi-hostpath-driver addon...
I0930 10:21:35.382823 14152 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0930 10:21:35.387406 14152 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 10:21:35.387435 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:35.417316 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:35.886861 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:35.914534 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:36.387853 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:36.415044 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:36.869759 14152 pod_ready.go:103] pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace has status "Ready":"False"
I0930 10:21:36.888279 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:36.914215 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:37.165179 14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.945029873s)
I0930 10:21:37.386847 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:37.414757 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:37.886967 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:37.914372 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:38.387715 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:38.414613 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:38.893336 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:38.893987 14152 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0930 10:21:38.894120 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2171123940 /var/lib/minikube/google_application_credentials.json
I0930 10:21:38.894140 14152 pod_ready.go:103] pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace has status "Ready":"False"
I0930 10:21:38.905222 14152 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0930 10:21:38.905346 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3307592155 /var/lib/minikube/google_cloud_project
I0930 10:21:38.914567 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:38.915425 14152 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0930 10:21:38.915470 14152 host.go:66] Checking if "minikube" exists ...
I0930 10:21:38.916118 14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
I0930 10:21:38.916143 14152 api_server.go:166] Checking apiserver status ...
I0930 10:21:38.916177 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:38.931589 14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
I0930 10:21:38.941360 14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
I0930 10:21:38.941416 14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
I0930 10:21:38.951157 14152 api_server.go:204] freezer state: "THAWED"
I0930 10:21:38.951192 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:38.954534 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:38.954600 14152 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0930 10:21:39.010245 14152 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0930 10:21:39.030624 14152 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0930 10:21:39.068581 14152 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0930 10:21:39.068636 14152 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0930 10:21:39.068791 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1575989071 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0930 10:21:39.079426 14152 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0930 10:21:39.079459 14152 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0930 10:21:39.110429 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671691026 /etc/kubernetes/addons/gcp-auth-service.yaml
I0930 10:21:39.122163 14152 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0930 10:21:39.122197 14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0930 10:21:39.122332 14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube287679987 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0930 10:21:39.154899 14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0930 10:21:39.387572 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:39.415256 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:39.549098 14152 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0930 10:21:39.550614 14152 out.go:177] * Verifying gcp-auth addon...
I0930 10:21:39.552962 14152 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0930 10:21:39.555080 14152 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0930 10:21:39.869421 14152 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:39.869449 14152 pod_ready.go:82] duration metric: took 7.505623079s for pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:39.869478 14152 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vkdlc" in "kube-system" namespace to be "Ready" ...
I0930 10:21:39.874964 14152 pod_ready.go:98] pod "coredns-7c65d6cfc9-vkdlc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-30 10:21:31 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-30 10:21:32 +0000 UTC,FinishedAt:2024-09-30 10:21:38 +0000 UTC,ContainerID:docker://46ee976023817f632b988d6749abed52c67d9c4ed3b4abbc09464bded457caa4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://46ee976023817f632b988d6749abed52c67d9c4ed3b4abbc09464bded457caa4 Started:0xc002814020 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025adde0} {Name:kube-api-access-wcn9x MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025addf0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0930 10:21:39.874997 14152 pod_ready.go:82] duration metric: took 5.508029ms for pod "coredns-7c65d6cfc9-vkdlc" in "kube-system" namespace to be "Ready" ...
E0930 10:21:39.875011 14152 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-vkdlc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-30 10:21:31 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-30 10:21:32 +0000 UTC,FinishedAt:2024-09-30 10:21:38 +0000 UTC,ContainerID:docker://46ee976023817f632b988d6749abed52c67d9c4ed3b4abbc09464bded457caa4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://46ee976023817f632b988d6749abed52c67d9c4ed3b4abbc09464bded457caa4 Started:0xc002814020 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025adde0} {Name:kube-api-access-wcn9x MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025addf0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0930 10:21:39.875024 14152 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:39.889049 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:39.914003 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:40.386749 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:40.414767 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:40.886561 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:40.914683 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:41.386623 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:41.414927 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:41.880106 14152 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.880126 14152 pod_ready.go:82] duration metric: took 2.00509396s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.880135 14152 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.884221 14152 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.884245 14152 pod_ready.go:82] duration metric: took 4.103459ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.884259 14152 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.886349 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:41.888316 14152 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.888332 14152 pod_ready.go:82] duration metric: took 4.066357ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.888340 14152 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zcvv" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.892446 14152 pod_ready.go:93] pod "kube-proxy-6zcvv" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:41.892468 14152 pod_ready.go:82] duration metric: took 4.11999ms for pod "kube-proxy-6zcvv" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.892479 14152 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:41.914231 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:42.268095 14152 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:42.268119 14152 pod_ready.go:82] duration metric: took 375.632443ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
I0930 10:21:42.268129 14152 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6496t" in "kube-system" namespace to be "Ready" ...
I0930 10:21:42.388827 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:42.414683 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:42.666957 14152 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6496t" in "kube-system" namespace has status "Ready":"True"
I0930 10:21:42.666979 14152 pod_ready.go:82] duration metric: took 398.843778ms for pod "nvidia-device-plugin-daemonset-6496t" in "kube-system" namespace to be "Ready" ...
I0930 10:21:42.666987 14152 pod_ready.go:39] duration metric: took 10.313817404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0930 10:21:42.667002 14152 api_server.go:52] waiting for apiserver process to appear ...
I0930 10:21:42.667050 14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0930 10:21:42.683903 14152 api_server.go:72] duration metric: took 10.944751852s to wait for apiserver process to appear ...
I0930 10:21:42.683923 14152 api_server.go:88] waiting for apiserver healthz status ...
I0930 10:21:42.683944 14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
I0930 10:21:42.687306 14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
ok
I0930 10:21:42.688067 14152 api_server.go:141] control plane version: v1.31.1
I0930 10:21:42.688090 14152 api_server.go:131] duration metric: took 4.159551ms to wait for apiserver health ...
I0930 10:21:42.688100 14152 system_pods.go:43] waiting for kube-system pods to appear ...
I0930 10:21:42.872150 14152 system_pods.go:59] 16 kube-system pods found
I0930 10:21:42.872181 14152 system_pods.go:61] "coredns-7c65d6cfc9-l6kz2" [8c9f80b9-eea9-44a8-815c-69b4dcceecf9] Running
I0930 10:21:42.872199 14152 system_pods.go:61] "csi-hostpath-attacher-0" [8baace2a-d4f6-46fc-906b-a4fb78e8e517] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0930 10:21:42.872208 14152 system_pods.go:61] "csi-hostpath-resizer-0" [affbdd42-4430-4ff1-a978-941e66701b22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0930 10:21:42.872219 14152 system_pods.go:61] "csi-hostpathplugin-6dwlc" [ff93f1cc-212a-41e5-be3e-db0842b636c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0930 10:21:42.872225 14152 system_pods.go:61] "etcd-ubuntu-20-agent-2" [60eb3919-ff73-4033-b678-1f1dc0a96b49] Running
I0930 10:21:42.872231 14152 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [000c5b05-949f-4593-88f6-17f1d9d1342a] Running
I0930 10:21:42.872240 14152 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [7a0e0e79-569c-4842-8d03-c2f0d5aa842c] Running
I0930 10:21:42.872246 14152 system_pods.go:61] "kube-proxy-6zcvv" [fde2c7b5-3e42-48a5-9b2a-670fc6e8e59f] Running
I0930 10:21:42.872254 14152 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [0f7cf227-109f-4118-8f0d-13b56957b763] Running
I0930 10:21:42.872262 14152 system_pods.go:61] "metrics-server-84c5f94fbc-k6tlb" [9dab8e12-be75-43d4-b706-334dbdb7b9c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0930 10:21:42.872271 14152 system_pods.go:61] "nvidia-device-plugin-daemonset-6496t" [5671e02b-bae3-433f-98c5-56b427f3e666] Running
I0930 10:21:42.872277 14152 system_pods.go:61] "registry-66c9cd494c-49nkl" [f279ea6c-0d65-4d94-9dc1-43ba6d130381] Running
I0930 10:21:42.872284 14152 system_pods.go:61] "registry-proxy-4lsgw" [3bd51464-305d-4990-aed6-cb08ea16c1b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0930 10:21:42.872294 14152 system_pods.go:61] "snapshot-controller-56fcc65765-4tnvr" [b8ea90d4-4a6a-4c29-b153-af3b944a30d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0930 10:21:42.872306 14152 system_pods.go:61] "snapshot-controller-56fcc65765-9sn5g" [9c07447c-6747-44fc-959a-b5b2e5744ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0930 10:21:42.872315 14152 system_pods.go:61] "storage-provisioner" [5a4ed029-4ecb-43ef-a31f-410f1039bb84] Running
I0930 10:21:42.872324 14152 system_pods.go:74] duration metric: took 184.217541ms to wait for pod list to return data ...
I0930 10:21:42.872336 14152 default_sa.go:34] waiting for default service account to be created ...
I0930 10:21:42.887082 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:42.914984 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:43.075588 14152 default_sa.go:45] found service account: "default"
I0930 10:21:43.075646 14152 default_sa.go:55] duration metric: took 203.274919ms for default service account to be created ...
I0930 10:21:43.075660 14152 system_pods.go:116] waiting for k8s-apps to be running ...
I0930 10:21:43.272061 14152 system_pods.go:86] 16 kube-system pods found
I0930 10:21:43.272087 14152 system_pods.go:89] "coredns-7c65d6cfc9-l6kz2" [8c9f80b9-eea9-44a8-815c-69b4dcceecf9] Running
I0930 10:21:43.272095 14152 system_pods.go:89] "csi-hostpath-attacher-0" [8baace2a-d4f6-46fc-906b-a4fb78e8e517] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0930 10:21:43.272101 14152 system_pods.go:89] "csi-hostpath-resizer-0" [affbdd42-4430-4ff1-a978-941e66701b22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0930 10:21:43.272108 14152 system_pods.go:89] "csi-hostpathplugin-6dwlc" [ff93f1cc-212a-41e5-be3e-db0842b636c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0930 10:21:43.272112 14152 system_pods.go:89] "etcd-ubuntu-20-agent-2" [60eb3919-ff73-4033-b678-1f1dc0a96b49] Running
I0930 10:21:43.272116 14152 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [000c5b05-949f-4593-88f6-17f1d9d1342a] Running
I0930 10:21:43.272121 14152 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [7a0e0e79-569c-4842-8d03-c2f0d5aa842c] Running
I0930 10:21:43.272124 14152 system_pods.go:89] "kube-proxy-6zcvv" [fde2c7b5-3e42-48a5-9b2a-670fc6e8e59f] Running
I0930 10:21:43.272128 14152 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [0f7cf227-109f-4118-8f0d-13b56957b763] Running
I0930 10:21:43.272133 14152 system_pods.go:89] "metrics-server-84c5f94fbc-k6tlb" [9dab8e12-be75-43d4-b706-334dbdb7b9c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0930 10:21:43.272139 14152 system_pods.go:89] "nvidia-device-plugin-daemonset-6496t" [5671e02b-bae3-433f-98c5-56b427f3e666] Running
I0930 10:21:43.272143 14152 system_pods.go:89] "registry-66c9cd494c-49nkl" [f279ea6c-0d65-4d94-9dc1-43ba6d130381] Running
I0930 10:21:43.272152 14152 system_pods.go:89] "registry-proxy-4lsgw" [3bd51464-305d-4990-aed6-cb08ea16c1b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0930 10:21:43.272157 14152 system_pods.go:89] "snapshot-controller-56fcc65765-4tnvr" [b8ea90d4-4a6a-4c29-b153-af3b944a30d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0930 10:21:43.272163 14152 system_pods.go:89] "snapshot-controller-56fcc65765-9sn5g" [9c07447c-6747-44fc-959a-b5b2e5744ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0930 10:21:43.272166 14152 system_pods.go:89] "storage-provisioner" [5a4ed029-4ecb-43ef-a31f-410f1039bb84] Running
I0930 10:21:43.272173 14152 system_pods.go:126] duration metric: took 196.507996ms to wait for k8s-apps to be running ...
I0930 10:21:43.272182 14152 system_svc.go:44] waiting for kubelet service to be running ....
I0930 10:21:43.272221 14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0930 10:21:43.284299 14152 system_svc.go:56] duration metric: took 12.107947ms WaitForService to wait for kubelet
I0930 10:21:43.284324 14152 kubeadm.go:582] duration metric: took 11.54517909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0930 10:21:43.284341 14152 node_conditions.go:102] verifying NodePressure condition ...
I0930 10:21:43.387174 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:43.413986 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:43.468658 14152 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0930 10:21:43.468686 14152 node_conditions.go:123] node cpu capacity is 8
I0930 10:21:43.468700 14152 node_conditions.go:105] duration metric: took 184.353931ms to run NodePressure ...
I0930 10:21:43.468713 14152 start.go:241] waiting for startup goroutines ...
I0930 10:21:43.887566 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:43.914626 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:44.387061 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:44.414212 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:44.887302 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:44.914239 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:45.387498 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:45.414401 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:45.888166 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:45.914623 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:46.387873 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:46.413653 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:46.888553 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:46.914793 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:47.386706 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:47.415445 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:47.887357 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:47.914499 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:48.387088 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:48.414089 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0930 10:21:48.887238 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:48.914475 14152 kapi.go:107] duration metric: took 16.003667595s to wait for kubernetes.io/minikube-addons=registry ...
I0930 10:21:49.387820 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:49.887870 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:50.390080 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:50.887211 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:51.387001 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:51.887356 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:52.388486 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:52.888370 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:53.387314 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:53.887933 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:54.387564 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:54.887466 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:55.388456 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:55.887949 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:56.387536 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:56.887445 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:57.387827 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:57.887791 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:58.387647 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:58.890090 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:59.386963 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:21:59.888054 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:00.414086 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:00.887718 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:01.387129 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:01.887646 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:02.388034 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:02.887363 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:03.387921 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:03.921339 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:04.387348 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:04.886806 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:05.388074 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:05.887632 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:06.476475 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:06.887462 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:07.386764 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:07.888203 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:08.387043 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:08.887380 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:09.387364 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:09.886516 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0930 10:22:10.387144 14152 kapi.go:107] duration metric: took 35.004322995s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0930 10:22:21.056369 14152 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0930 10:22:21.056388 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:22:21.556702 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
[... identical kapi.go:96 "waiting for pod" poll lines (state Pending, one every ~500ms) from 10:22:22 through 10:22:55 elided ...]
I0930 10:22:55.556170 14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0930 10:22:56.056244 14152 kapi.go:107] duration metric: took 1m16.503281291s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0930 10:22:56.058066 14152 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0930 10:22:56.059421 14152 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0930 10:22:56.060906 14152 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
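The opt-out mentioned above is a pod-level label. A minimal sketch of a pod manifest carrying it, written as a Python dict; the `gcp-auth-skip-secret` key comes from the log message, while the pod name, image, and label value "true" are illustrative assumptions (the webhook keys off the label itself):

```python
# Hypothetical pod manifest opting out of GCP credential mounting.
# Only the "gcp-auth-skip-secret" label key is taken from the log;
# the name, image, and value "true" are illustrative assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "no-gcp-creds",  # hypothetical name
        "labels": {"gcp-auth-skip-secret": "true"},
    },
    "spec": {
        "containers": [{"name": "app", "image": "busybox"}],
    },
}
```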
I0930 10:22:56.062295 14152 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, storage-provisioner, storage-provisioner-rancher, yakd, metrics-server, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0930 10:22:56.063572 14152 addons.go:510] duration metric: took 1m24.337731848s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner storage-provisioner storage-provisioner-rancher yakd metrics-server inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0930 10:22:56.063619 14152 start.go:246] waiting for cluster config update ...
I0930 10:22:56.063635 14152 start.go:255] writing updated cluster config ...
I0930 10:22:56.063877 14152 exec_runner.go:51] Run: rm -f paused
I0930 10:22:56.107264 14152 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0930 10:22:56.109227 14152 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Mon 2024-08-19 17:40:18 UTC, end at Mon 2024-09-30 10:32:47 UTC. --
Sep 30 10:23:51 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:51.152170675Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c725380adcbf0519 traceID=48d6c4a189c6ff22302e4d9f6e51e976
Sep 30 10:23:51 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:51.154559396Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c725380adcbf0519 traceID=48d6c4a189c6ff22302e4d9f6e51e976
Sep 30 10:23:54 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:54.100954293Z" level=info msg="Container failed to exit within 30s of signal 3 - using the force" container=446753e1dbd953f28ff5a38d76a0c59eb2361967a10e898873aa5da832b7fd83 spanID=77213cd02bf55884 traceID=c639da10f8cf5f0b2b4cc2cd5761778d
Sep 30 10:23:54 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:54.120697356Z" level=info msg="ignoring event" container=446753e1dbd953f28ff5a38d76a0c59eb2361967a10e898873aa5da832b7fd83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:23:54 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:54.246742173Z" level=info msg="ignoring event" container=226312f846e45b5af3e224d96d56822ab3610315c2e1a583e7ec2271322969dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:24:16 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:24:16.149232628Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5c1158a38c234ed4 traceID=7582d0044b1171e5bed87fe3b5e1089e
Sep 30 10:24:16 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:24:16.151509823Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5c1158a38c234ed4 traceID=7582d0044b1171e5bed87fe3b5e1089e
Sep 30 10:24:57 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:24:57.152950035Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4deb59e3d585f351 traceID=004ea83b82c811b58b74856797c33229
Sep 30 10:24:57 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:24:57.155044385Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4deb59e3d585f351 traceID=004ea83b82c811b58b74856797c33229
Sep 30 10:26:28 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:26:28.167760918Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=9c5265e26501244c traceID=7fe14e20482f9144417a6de48dd1603d
Sep 30 10:26:28 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:26:28.170085159Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=9c5265e26501244c traceID=7fe14e20482f9144417a6de48dd1603d
Sep 30 10:29:14 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:29:14.158227679Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=1fdb7dd66d5d39ec traceID=40288c81b041285da5047257a1908e1e
Sep 30 10:29:14 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:29:14.160726753Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=1fdb7dd66d5d39ec traceID=40288c81b041285da5047257a1908e1e
Sep 30 10:31:47 ubuntu-20-agent-2 cri-dockerd[15137]: time="2024-09-30T10:31:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/053317de53afc515d061cd0225fdeb9d78a0cc0484b18ca973a72ddb3d6100bb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 30 10:31:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:31:47.419726434Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=5db194501f01e803 traceID=aa4eee7aa649e59274183bad5b2875ad
Sep 30 10:31:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:31:47.421972810Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=5db194501f01e803 traceID=aa4eee7aa649e59274183bad5b2875ad
Sep 30 10:32:03 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:03.155350329Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=cf227ef6a0193812 traceID=d98514dd854b9e5f6abed4e865959477
Sep 30 10:32:03 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:03.157451111Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=cf227ef6a0193812 traceID=d98514dd854b9e5f6abed4e865959477
Sep 30 10:32:32 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:32.147733631Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=d0127899ce499d03 traceID=4565e20d393b06e158ee815dbef9ea4b
Sep 30 10:32:32 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:32.149880727Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=d0127899ce499d03 traceID=4565e20d393b06e158ee815dbef9ea4b
Sep 30 10:32:46 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:46.883672659Z" level=info msg="ignoring event" container=053317de53afc515d061cd0225fdeb9d78a0cc0484b18ca973a72ddb3d6100bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:32:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:47.143441131Z" level=info msg="ignoring event" container=ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:32:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:47.205040928Z" level=info msg="ignoring event" container=20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:32:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:47.278578428Z" level=info msg="ignoring event" container=e6aa8711c0f3b3400e09abe872b72f1e773279268095a5de6aee87c11c464933 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 30 10:32:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:47.367901138Z" level=info msg="ignoring event" container=00bfe4005c12f489a53e983131546cdafc15f2e7857d73beb19e7a31277b2c77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
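The dockerd entries above are logfmt-style key=value records, and the recurring failure is the `unauthorized: authentication failed` response from gcr.io. A small filter for extracting those records from a log like this one (a sketch; the regex only handles the quoting style seen above):

```python
import re

# Parse a dockerd logfmt line ('key=value' pairs, values optionally
# double-quoted with backslash-escaped inner quotes) into a dict.
def parse_logfmt(line):
    fields = {}
    for key, val in re.findall(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)', line):
        if val.startswith('"'):
            val = val[1:-1].replace('\\"', '"')
        fields[key] = val
    return fields

# Sample record copied from the dockerd log above.
sample = r'time="2024-09-30T10:23:51.154559396Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c725380adcbf0519'

fields = parse_logfmt(sample)
# fields["level"] is "error"; fields["msg"] carries the registry error
```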
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
77d3a63a4499d gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 dde583950cee8 gcp-auth-89d5ffd79-ncvcd
2715aeb1faecb registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 78fbc418d73c9 csi-hostpathplugin-6dwlc
445379e1f78c2 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 78fbc418d73c9 csi-hostpathplugin-6dwlc
fb526e3386c16 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 78fbc418d73c9 csi-hostpathplugin-6dwlc
151aa322ca049 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 78fbc418d73c9 csi-hostpathplugin-6dwlc
16030347ece15 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 78fbc418d73c9 csi-hostpathplugin-6dwlc
1ff3076f030d2 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 78fbc418d73c9 csi-hostpathplugin-6dwlc
4b3a822c6afae registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 15f462964a9e7 csi-hostpath-attacher-0
8e1cb05130fa6 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 3a95adbe7ba81 csi-hostpath-resizer-0
a4b3036371375 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 3b693fa000d24 snapshot-controller-56fcc65765-9sn5g
3d96127fdb7f6 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 e202dc2df4c12 snapshot-controller-56fcc65765-4tnvr
2cfb8b31a2a3f registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 10 minutes ago Running metrics-server 0 59c4aa91b3d85 metrics-server-84c5f94fbc-k6tlb
a8fde3e4664b0 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 69560888b64c5 yakd-dashboard-67d98fc6b-zp446
eb6fea0098023 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 10 minutes ago Running gadget 0 0a0ef867c54cb gadget-gnw75
275554e8ee343 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 11 minutes ago Running local-path-provisioner 0 a35558757f855 local-path-provisioner-86d989889c-gfrcs
7a6d235f7418d gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 11 minutes ago Running cloud-spanner-emulator 0 98cb99fb801c1 cloud-spanner-emulator-5b584cc74-56ngp
fc058dacc50a9 nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 0449a34351598 nvidia-device-plugin-daemonset-6496t
787ae08a32416 6e38f40d628db 11 minutes ago Running storage-provisioner 0 2e20d6e48bd87 storage-provisioner
813e6bc8f907c c69fa2e9cbf5f 11 minutes ago Running coredns 0 366d678fb0d89 coredns-7c65d6cfc9-l6kz2
f353b443fe2db 60c005f310ff3 11 minutes ago Running kube-proxy 0 21b50e65d5600 kube-proxy-6zcvv
bfc730f14072e 2e96e5913fc06 11 minutes ago Running etcd 0 97839c430679a etcd-ubuntu-20-agent-2
829177e649efe 6bab7719df100 11 minutes ago Running kube-apiserver 0 256a6621c886c kube-apiserver-ubuntu-20-agent-2
ded343e5a89cc 9aa1fad941575 11 minutes ago Running kube-scheduler 0 b5b4a6390da5d kube-scheduler-ubuntu-20-agent-2
1279df1159b2f 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 fdafef1c1cd9f kube-controller-manager-ubuntu-20-agent-2
==> coredns [813e6bc8f907] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:60216 - 7481 "HINFO IN 3248328176289781825.6718732232653688194. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036004084s
[INFO] 10.244.0.23:51311 - 33272 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000251243s
[INFO] 10.244.0.23:42563 - 41308 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000353657s
[INFO] 10.244.0.23:49151 - 33869 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009722s
[INFO] 10.244.0.23:51011 - 17223 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104725s
[INFO] 10.244.0.23:32768 - 60037 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152797s
[INFO] 10.244.0.23:52965 - 35283 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146158s
[INFO] 10.244.0.23:59149 - 34124 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004172305s
[INFO] 10.244.0.23:34492 - 48188 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004230952s
[INFO] 10.244.0.23:52366 - 50679 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003444693s
[INFO] 10.244.0.23:49279 - 38702 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003611762s
[INFO] 10.244.0.23:53116 - 12018 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003551489s
[INFO] 10.244.0.23:59171 - 58768 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.008509604s
[INFO] 10.244.0.23:40764 - 38481 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002620262s
[INFO] 10.244.0.23:54666 - 36232 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002805743s
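The NXDOMAIN cascade above is ordinary resolv.conf search-list expansion: with `ndots:5` (per the rewritten resolv.conf in the cri-dockerd log), `storage.googleapis.com` has fewer than five dots, so every search domain is tried and fails before the bare name finally resolves (the two NOERROR lines). A sketch of that candidate ordering, assuming glibc-style ndots semantics:

```python
# Candidate FQDNs a resolver tries for a relative name, given a
# resolv.conf search list and ndots threshold (glibc-style semantics
# assumed): names with fewer than `ndots` dots get the search domains
# appended first, and the name is tried as-is last.
def candidates(name, search, ndots=5):
    if name.endswith("."):  # already fully qualified
        return [name]
    as_is = name + "."
    expanded = [f"{name}.{domain}." for domain in search]
    if name.count(".") >= ndots:
        return [as_is] + expanded
    return expanded + [as_is]

# Search list observed in the CoreDNS queries above (the client pod
# sits in the gcp-auth namespace).
search = [
    "gcp-auth.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "us-west1-a.c.k8s-minikube.internal",
    "c.k8s-minikube.internal",
    "google.internal",
]
# candidates("storage.googleapis.com", search) reproduces the six
# NXDOMAIN lookups followed by the successful bare-name query.
```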
==> describe nodes <==
Name: ubuntu-20-agent-2
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-2
kubernetes.io/os=linux
minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-2
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 30 Sep 2024 10:21:23 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-2
AcquireTime: <unset>
RenewTime: Mon, 30 Sep 2024 10:32:38 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 30 Sep 2024 10:28:34 +0000 Mon, 30 Sep 2024 10:21:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 30 Sep 2024 10:28:34 +0000 Mon, 30 Sep 2024 10:21:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 30 Sep 2024 10:28:34 +0000 Mon, 30 Sep 2024 10:21:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 30 Sep 2024 10:28:34 +0000 Mon, 30 Sep 2024 10:21:25 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.138.0.48
Hostname: ubuntu-20-agent-2
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859316Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 1ec29a5c-5f40-e854-ccac-68a60c2524db
Boot ID: ef9eed15-051c-4afe-8634-23d275b24342
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m13s
default cloud-spanner-emulator-5b584cc74-56ngp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-gnw75 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-ncvcd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-l6kz2 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-6dwlc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-2 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-2 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-2 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-6zcvv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-2 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-k6tlb 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-6496t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-4tnvr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-9sn5g 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-gfrcs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-zp446 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal NodeHasSufficientMemory 11m (x8 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m (x7 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m (x7 over 11m) kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
==> dmesg <==
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 1f df c1 20 21 08 06
[ +0.012234] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 7f cf 3e b3 f1 08 06
[ +2.637486] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a bc c5 bb 1d c6 08 06
[ +1.474121] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 98 60 f4 2d 64 08 06
[Sep30 10:22] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e a7 31 e9 7b bc 08 06
[ +4.437769] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 6c ca d8 fe ac 08 06
[ +0.269653] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 a2 1c 11 ee 45 08 06
[ +0.149081] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 3a c0 7f 02 cb 08 06
[ +1.283154] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 5b ce 3f 31 1a 08 06
[ +35.911599] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 90 c2 a2 91 4a 08 06
[ +0.020153] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 52 05 90 ac 35 08 06
[ +11.234903] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 fa 93 fb 23 2f 08 06
[ +0.000451] IPv4: martian source 10.244.0.23 from 10.244.0.3, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a dc 54 7a c1 80 08 06
==> etcd [bfc730f14072] <==
{"level":"info","ts":"2024-09-30T10:21:22.840570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 1"}
{"level":"info","ts":"2024-09-30T10:21:22.840637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-30T10:21:22.840667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
{"level":"info","ts":"2024-09-30T10:21:22.840684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
{"level":"info","ts":"2024-09-30T10:21:22.840696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-30T10:21:22.840712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
{"level":"info","ts":"2024-09-30T10:21:22.840726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
{"level":"info","ts":"2024-09-30T10:21:22.841556Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-30T10:21:22.842131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-30T10:21:22.842135Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-30T10:21:22.842158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-30T10:21:22.842359Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-30T10:21:22.842427Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-30T10:21:22.842458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-30T10:21:22.842527Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-30T10:21:22.842549Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-30T10:21:22.843194Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-30T10:21:22.843227Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-30T10:21:22.843978Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
{"level":"info","ts":"2024-09-30T10:21:22.843997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2024-09-30T10:21:34.862375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.293496ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-30T10:21:34.862463Z","caller":"traceutil/trace.go:171","msg":"trace[867252267] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:0; response_revision:724; }","duration":"106.395178ms","start":"2024-09-30T10:21:34.756056Z","end":"2024-09-30T10:21:34.862451Z","steps":["trace[867252267] 'agreement among raft nodes before linearized reading' (duration: 106.266644ms)"],"step_count":1}
{"level":"info","ts":"2024-09-30T10:31:22.861908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1694}
{"level":"info","ts":"2024-09-30T10:31:22.885267Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1694,"took":"22.883896ms","hash":3580243840,"current-db-size-bytes":8245248,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4165632,"current-db-size-in-use":"4.2 MB"}
{"level":"info","ts":"2024-09-30T10:31:22.885310Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3580243840,"revision":1694,"compact-revision":-1}
==> gcp-auth [77d3a63a4499] <==
2024/09/30 10:22:55 GCP Auth Webhook started!
2024/09/30 10:23:11 Ready to marshal response ...
2024/09/30 10:23:11 Ready to write response ...
2024/09/30 10:23:12 Ready to marshal response ...
2024/09/30 10:23:12 Ready to write response ...
2024/09/30 10:23:34 Ready to marshal response ...
2024/09/30 10:23:34 Ready to write response ...
2024/09/30 10:23:34 Ready to marshal response ...
2024/09/30 10:23:34 Ready to write response ...
2024/09/30 10:23:34 Ready to marshal response ...
2024/09/30 10:23:34 Ready to write response ...
2024/09/30 10:31:46 Ready to marshal response ...
2024/09/30 10:31:46 Ready to write response ...
==> kernel <==
10:32:47 up 15 min, 0 users, load average: 0.32, 0.75, 0.52
Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [829177e649ef] <==
W0930 10:22:13.859640 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.131.47:443: connect: connection refused
W0930 10:22:20.564751 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.177.15:443: connect: connection refused
E0930 10:22:20.564784 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.177.15:443: connect: connection refused" logger="UnhandledError"
W0930 10:22:42.572196 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.177.15:443: connect: connection refused
E0930 10:22:42.572229 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.177.15:443: connect: connection refused" logger="UnhandledError"
W0930 10:22:42.581501 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.177.15:443: connect: connection refused
E0930 10:22:42.581533 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.177.15:443: connect: connection refused" logger="UnhandledError"
I0930 10:23:11.372229 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0930 10:23:11.387369 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0930 10:23:23.785822 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0930 10:23:23.806841 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0930 10:23:23.924090 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0930 10:23:23.925899 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0930 10:23:23.925952 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0930 10:23:23.991148 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0930 10:23:24.073457 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0930 10:23:24.100494 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0930 10:23:24.182098 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0930 10:23:24.955804 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0930 10:23:24.992036 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0930 10:23:25.106367 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0930 10:23:25.106986 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0930 10:23:25.182336 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0930 10:23:25.182344 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0930 10:23:25.356355 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [1279df1159b2] <==
W0930 10:31:29.795582 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:31:29.795623 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:31:30.359398 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:31:30.359439 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:31:38.115780 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:31:38.115823 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:31:45.683174 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:31:45.683233 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:31:49.995118 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:31:49.995162 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:31:56.465375 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:31:56.465424 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:32:02.155503 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:32:02.155545 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:32:24.999175 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:32:24.999228 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:32:27.958873 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:32:27.958919 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:32:30.092646 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:32:30.092691 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:32:40.629001 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:32:40.629045 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0930 10:32:41.395056 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0930 10:32:41.395100 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0930 10:32:47.107599 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="11.091µs"
==> kube-proxy [f353b443fe2d] <==
I0930 10:21:32.450974 1 server_linux.go:66] "Using iptables proxy"
I0930 10:21:32.765348 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
E0930 10:21:32.765432 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0930 10:21:32.875103 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0930 10:21:32.875165 1 server_linux.go:169] "Using iptables Proxier"
I0930 10:21:32.878099 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0930 10:21:32.878448 1 server.go:483] "Version info" version="v1.31.1"
I0930 10:21:32.878477 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0930 10:21:32.880594 1 config.go:199] "Starting service config controller"
I0930 10:21:32.880621 1 shared_informer.go:313] Waiting for caches to sync for service config
I0930 10:21:32.880661 1 config.go:105] "Starting endpoint slice config controller"
I0930 10:21:32.880667 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0930 10:21:32.881115 1 config.go:328] "Starting node config controller"
I0930 10:21:32.881122 1 shared_informer.go:313] Waiting for caches to sync for node config
I0930 10:21:32.981720 1 shared_informer.go:320] Caches are synced for node config
I0930 10:21:32.981766 1 shared_informer.go:320] Caches are synced for service config
I0930 10:21:32.981798 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [ded343e5a89c] <==
E0930 10:21:23.719727 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.719642 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0930 10:21:23.719829 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0930 10:21:23.719850 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.719677 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0930 10:21:23.719888 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.719653 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0930 10:21:23.719918 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.719741 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0930 10:21:23.719954 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:23.719820 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0930 10:21:23.719982 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0930 10:21:24.588445 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0930 10:21:24.588486 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0930 10:21:24.629142 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0930 10:21:24.629183 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:24.682723 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0930 10:21:24.682765 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0930 10:21:24.686030 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0930 10:21:24.686070 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0930 10:21:24.698291 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0930 10:21:24.698334 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0930 10:21:24.758944 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0930 10:21:24.758996 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0930 10:21:26.717350 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Mon 2024-08-19 17:40:18 UTC, end at Mon 2024-09-30 10:32:48 UTC. --
Sep 30 10:32:37 ubuntu-20-agent-2 kubelet[16031]: E0930 10:32:37.009566 16031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b0e36b4a-71a7-4915-8c32-b4be6cd9aa5a"
Sep 30 10:32:43 ubuntu-20-agent-2 kubelet[16031]: E0930 10:32:43.009871 16031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="476ee8f0-d7e7-4a87-9b58-c2082f236775"
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.043730 16031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/476ee8f0-d7e7-4a87-9b58-c2082f236775-gcp-creds\") pod \"476ee8f0-d7e7-4a87-9b58-c2082f236775\" (UID: \"476ee8f0-d7e7-4a87-9b58-c2082f236775\") "
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.043802 16031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd6lf\" (UniqueName: \"kubernetes.io/projected/476ee8f0-d7e7-4a87-9b58-c2082f236775-kube-api-access-fd6lf\") pod \"476ee8f0-d7e7-4a87-9b58-c2082f236775\" (UID: \"476ee8f0-d7e7-4a87-9b58-c2082f236775\") "
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.043812 16031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/476ee8f0-d7e7-4a87-9b58-c2082f236775-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "476ee8f0-d7e7-4a87-9b58-c2082f236775" (UID: "476ee8f0-d7e7-4a87-9b58-c2082f236775"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.043906 16031 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/476ee8f0-d7e7-4a87-9b58-c2082f236775-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.045538 16031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476ee8f0-d7e7-4a87-9b58-c2082f236775-kube-api-access-fd6lf" (OuterVolumeSpecName: "kube-api-access-fd6lf") pod "476ee8f0-d7e7-4a87-9b58-c2082f236775" (UID: "476ee8f0-d7e7-4a87-9b58-c2082f236775"). InnerVolumeSpecName "kube-api-access-fd6lf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.144495 16031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fd6lf\" (UniqueName: \"kubernetes.io/projected/476ee8f0-d7e7-4a87-9b58-c2082f236775-kube-api-access-fd6lf\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.445994 16031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpqn5\" (UniqueName: \"kubernetes.io/projected/f279ea6c-0d65-4d94-9dc1-43ba6d130381-kube-api-access-vpqn5\") pod \"f279ea6c-0d65-4d94-9dc1-43ba6d130381\" (UID: \"f279ea6c-0d65-4d94-9dc1-43ba6d130381\") "
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.448129 16031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f279ea6c-0d65-4d94-9dc1-43ba6d130381-kube-api-access-vpqn5" (OuterVolumeSpecName: "kube-api-access-vpqn5") pod "f279ea6c-0d65-4d94-9dc1-43ba6d130381" (UID: "f279ea6c-0d65-4d94-9dc1-43ba6d130381"). InnerVolumeSpecName "kube-api-access-vpqn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.546849 16031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjqwt\" (UniqueName: \"kubernetes.io/projected/3bd51464-305d-4990-aed6-cb08ea16c1b9-kube-api-access-qjqwt\") pod \"3bd51464-305d-4990-aed6-cb08ea16c1b9\" (UID: \"3bd51464-305d-4990-aed6-cb08ea16c1b9\") "
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.547080 16031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vpqn5\" (UniqueName: \"kubernetes.io/projected/f279ea6c-0d65-4d94-9dc1-43ba6d130381-kube-api-access-vpqn5\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.548881 16031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd51464-305d-4990-aed6-cb08ea16c1b9-kube-api-access-qjqwt" (OuterVolumeSpecName: "kube-api-access-qjqwt") pod "3bd51464-305d-4990-aed6-cb08ea16c1b9" (UID: "3bd51464-305d-4990-aed6-cb08ea16c1b9"). InnerVolumeSpecName "kube-api-access-qjqwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.647250 16031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qjqwt\" (UniqueName: \"kubernetes.io/projected/3bd51464-305d-4990-aed6-cb08ea16c1b9-kube-api-access-qjqwt\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.653539 16031 scope.go:117] "RemoveContainer" containerID="ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.670614 16031 scope.go:117] "RemoveContainer" containerID="ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: E0930 10:32:47.671384 16031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305" containerID="ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.671424 16031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"} err="failed to get container status \"ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305\": rpc error: code = Unknown desc = Error response from daemon: No such container: ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.671452 16031 scope.go:117] "RemoveContainer" containerID="20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.687273 16031 scope.go:117] "RemoveContainer" containerID="20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: E0930 10:32:47.688296 16031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500" containerID="20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"
Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.688513 16031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"} err="failed to get container status \"20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500\": rpc error: code = Unknown desc = Error response from daemon: No such container: 20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"
Sep 30 10:32:48 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:48.017988 16031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd51464-305d-4990-aed6-cb08ea16c1b9" path="/var/lib/kubelet/pods/3bd51464-305d-4990-aed6-cb08ea16c1b9/volumes"
Sep 30 10:32:48 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:48.018312 16031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476ee8f0-d7e7-4a87-9b58-c2082f236775" path="/var/lib/kubelet/pods/476ee8f0-d7e7-4a87-9b58-c2082f236775/volumes"
Sep 30 10:32:48 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:48.018500 16031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f279ea6c-0d65-4d94-9dc1-43ba6d130381" path="/var/lib/kubelet/pods/f279ea6c-0d65-4d94-9dc1-43ba6d130381/volumes"
==> storage-provisioner [787ae08a3241] <==
I0930 10:21:34.216352 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0930 10:21:34.233408 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0930 10:21:34.233460 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0930 10:21:34.246641 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0930 10:21:34.246864 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_51215cdf-210d-471b-a636-de14d21ab3dc!
I0930 10:21:34.248241 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12f047f2-6055-43ed-8ded-62a38c2a34fb", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_51215cdf-210d-471b-a636-de14d21ab3dc became leader
I0930 10:21:34.347468 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_51215cdf-210d-471b-a636-de14d21ab3dc!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent-2/10.138.0.48
Start Time:       Mon, 30 Sep 2024 10:23:34 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.25
IPs:
  IP:  10.244.0.25
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kjmxj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-kjmxj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
  Normal   Pulling    7m51s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m51s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m51s (x4 over 9m13s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m24s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m8s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.81s)