=== RUN TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.777295ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-v2hm7" [e6f429ae-9168-48c6-8e02-968ce47780ae] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003578182s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lp46v" [6d764de2-9241-4f56-9564-ae56133efa57] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00358376s
addons_test.go:342: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.079565408s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
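The timeout above means the in-cluster probe never got an HTTP response from the registry Service. A minimal sketch for re-running that probe by hand, using the same service name and image the test used (the `-T 10` wget timeout and the `registry-probe` pod name are additions, so a dead connection fails in seconds rather than eating the full one-minute kubectl wait):

```shell
# Service DNS name taken directly from the failing test above.
SVC="registry.kube-system.svc.cluster.local"
echo "probing http://$SVC"

# Only attempt the in-cluster probe when kubectl is actually on PATH
# and pointed at the minikube context; otherwise just print the target.
if command -v kubectl >/dev/null 2>&1; then
  kubectl --context minikube run --rm registry-probe --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S -T 10 http://$SVC"
fi
```

If the guarded probe also times out, the usual next step is checking whether the name resolves at all from inside a pod, which separates a DNS failure from a service-routing failure.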
addons_test.go:361: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/06 18:43:04 [DEBUG] GET http://10.154.0.4:5000
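The DEBUG line above shows the harness probing the registry directly via the node IP on port 5000, bypassing cluster DNS. A hedged host-side equivalent (the IP comes from the log; the 5-second curl timeout is an addition) that prints the HTTP status code, useful for confirming the registry itself is up when only the in-cluster path fails:

```shell
# Node IP as reported by 'minikube ip' in the log above.
NODE_IP=10.154.0.4

# Short-timeout status check; '|| true' keeps the script going on failure.
if command -v curl >/dev/null 2>&1; then
  curl -sS -m 5 -o /dev/null -w '%{http_code}\n' "http://$NODE_IP:5000/" || true
fi
echo "checked http://$NODE_IP:5000"
```

A 200 here combined with the earlier in-cluster timeout points at DNS or kube-proxy/Service routing rather than the registry pod itself.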
addons_test.go:390: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:41257 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:30 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 06 Sep 24 18:30 UTC | 06 Sep 24 18:30 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 06 Sep 24 18:30 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 06 Sep 24 18:30 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 06 Sep 24 18:30 UTC | 06 Sep 24 18:33 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| | --addons=helm-tiller | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 06 Sep 24 18:33 UTC | 06 Sep 24 18:33 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/06 18:30:16
Running on machine: ubuntu-20-agent-9
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0906 18:30:16.432171 16537 out.go:345] Setting OutFile to fd 1 ...
I0906 18:30:16.432394 16537 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:30:16.432403 16537 out.go:358] Setting ErrFile to fd 2...
I0906 18:30:16.432408 16537 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:30:16.432615 16537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-5859/.minikube/bin
I0906 18:30:16.433214 16537 out.go:352] Setting JSON to false
I0906 18:30:16.434103 16537 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":758,"bootTime":1725646658,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0906 18:30:16.434161 16537 start.go:139] virtualization: kvm guest
I0906 18:30:16.436247 16537 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0906 18:30:16.437386 16537 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-5859/.minikube/cache/preloaded-tarball: no such file or directory
I0906 18:30:16.437429 16537 out.go:177] - MINIKUBE_LOCATION=19576
I0906 18:30:16.437433 16537 notify.go:220] Checking for updates...
I0906 18:30:16.438952 16537 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0906 18:30:16.440252 16537 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19576-5859/kubeconfig
I0906 18:30:16.441596 16537 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-5859/.minikube
I0906 18:30:16.442748 16537 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0906 18:30:16.443916 16537 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0906 18:30:16.445141 16537 driver.go:394] Setting default libvirt URI to qemu:///system
I0906 18:30:16.454936 16537 out.go:177] * Using the none driver based on user configuration
I0906 18:30:16.456069 16537 start.go:297] selected driver: none
I0906 18:30:16.456089 16537 start.go:901] validating driver "none" against <nil>
I0906 18:30:16.456100 16537 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0906 18:30:16.456128 16537 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0906 18:30:16.456397 16537 out.go:270] ! The 'none' driver does not respect the --memory flag
I0906 18:30:16.456931 16537 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0906 18:30:16.457143 16537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0906 18:30:16.457199 16537 cni.go:84] Creating CNI manager for ""
I0906 18:30:16.457218 16537 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0906 18:30:16.457226 16537 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0906 18:30:16.457272 16537 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0906 18:30:16.458867 16537 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0906 18:30:16.460413 16537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/config.json ...
I0906 18:30:16.460444 16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/config.json: {Name:mkdd137618f698f37b9d6029ffe3cabfeea10cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:16.460577 16537 start.go:360] acquireMachinesLock for minikube: {Name:mk32cf99842cb1a787bdad14db608c92d2701216 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0906 18:30:16.460614 16537 start.go:364] duration metric: took 19.25µs to acquireMachinesLock for "minikube"
I0906 18:30:16.460632 16537 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerI
Ps:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0906 18:30:16.460716 16537 start.go:125] createHost starting for "" (driver="none")
I0906 18:30:16.462209 16537 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0906 18:30:16.463460 16537 exec_runner.go:51] Run: systemctl --version
I0906 18:30:16.466004 16537 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0906 18:30:16.466050 16537 client.go:168] LocalClient.Create starting
I0906 18:30:16.466147 16537 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-5859/.minikube/certs/ca.pem
I0906 18:30:16.466187 16537 main.go:141] libmachine: Decoding PEM data...
I0906 18:30:16.466213 16537 main.go:141] libmachine: Parsing certificate...
I0906 18:30:16.466269 16537 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-5859/.minikube/certs/cert.pem
I0906 18:30:16.466293 16537 main.go:141] libmachine: Decoding PEM data...
I0906 18:30:16.466309 16537 main.go:141] libmachine: Parsing certificate...
I0906 18:30:16.466667 16537 client.go:171] duration metric: took 606.29µs to LocalClient.Create
I0906 18:30:16.466690 16537 start.go:167] duration metric: took 688.493µs to libmachine.API.Create "minikube"
I0906 18:30:16.466696 16537 start.go:293] postStartSetup for "minikube" (driver="none")
I0906 18:30:16.466738 16537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0906 18:30:16.466794 16537 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0906 18:30:16.475722 16537 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0906 18:30:16.475757 16537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0906 18:30:16.475775 16537 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0906 18:30:16.477856 16537 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0906 18:30:16.479126 16537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-5859/.minikube/addons for local assets ...
I0906 18:30:16.479195 16537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-5859/.minikube/files for local assets ...
I0906 18:30:16.479226 16537 start.go:296] duration metric: took 12.524095ms for postStartSetup
I0906 18:30:16.479820 16537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/config.json ...
I0906 18:30:16.479964 16537 start.go:128] duration metric: took 19.238821ms to createHost
I0906 18:30:16.479977 16537 start.go:83] releasing machines lock for "minikube", held for 19.351879ms
I0906 18:30:16.480315 16537 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0906 18:30:16.480394 16537 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0906 18:30:16.482200 16537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0906 18:30:16.482250 16537 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0906 18:30:16.491093 16537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0906 18:30:16.491116 16537 start.go:495] detecting cgroup driver to use...
I0906 18:30:16.491145 16537 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0906 18:30:16.491309 16537 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0906 18:30:16.510925 16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0906 18:30:16.520059 16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0906 18:30:16.529348 16537 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0906 18:30:16.529403 16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0906 18:30:16.538116 16537 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0906 18:30:16.547347 16537 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0906 18:30:16.556188 16537 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0906 18:30:16.567992 16537 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0906 18:30:16.576909 16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0906 18:30:16.585687 16537 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0906 18:30:16.594479 16537 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0906 18:30:16.603324 16537 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0906 18:30:16.611285 16537 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0906 18:30:16.618460 16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0906 18:30:16.847673 16537 exec_runner.go:51] Run: sudo systemctl restart containerd
I0906 18:30:16.973791 16537 start.go:495] detecting cgroup driver to use...
I0906 18:30:16.973847 16537 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0906 18:30:16.973956 16537 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0906 18:30:16.993887 16537 exec_runner.go:51] Run: which cri-dockerd
I0906 18:30:16.995134 16537 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0906 18:30:17.003608 16537 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0906 18:30:17.003646 16537 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0906 18:30:17.003696 16537 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0906 18:30:17.012186 16537 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0906 18:30:17.012343 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2291484935 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0906 18:30:17.021886 16537 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0906 18:30:17.256569 16537 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0906 18:30:17.476575 16537 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0906 18:30:17.476726 16537 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0906 18:30:17.476741 16537 exec_runner.go:203] rm: /etc/docker/daemon.json
I0906 18:30:17.476785 16537 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0906 18:30:17.485966 16537 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0906 18:30:17.486113 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1332208072 /etc/docker/daemon.json
I0906 18:30:17.494142 16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0906 18:30:17.722095 16537 exec_runner.go:51] Run: sudo systemctl restart docker
I0906 18:30:18.102750 16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0906 18:30:18.113702 16537 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0906 18:30:18.128838 16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0906 18:30:18.140312 16537 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0906 18:30:18.345976 16537 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0906 18:30:18.572578 16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0906 18:30:18.794135 16537 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0906 18:30:18.808429 16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0906 18:30:18.819316 16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0906 18:30:19.041063 16537 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0906 18:30:19.109832 16537 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0906 18:30:19.109903 16537 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0906 18:30:19.111678 16537 start.go:563] Will wait 60s for crictl version
I0906 18:30:19.111723 16537 exec_runner.go:51] Run: which crictl
I0906 18:30:19.112540 16537 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0906 18:30:19.142353 16537 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.2.0
RuntimeApiVersion: v1
I0906 18:30:19.142410 16537 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0906 18:30:19.165488 16537 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0906 18:30:19.189803 16537 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
I0906 18:30:19.189878 16537 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0906 18:30:19.192954 16537 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0906 18:30:19.194423 16537 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0906 18:30:19.194550 16537 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0906 18:30:19.194572 16537 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.0 docker true true} ...
I0906 18:30:19.194673 16537 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0906 18:30:19.194723 16537 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0906 18:30:19.239708 16537 cni.go:84] Creating CNI manager for ""
I0906 18:30:19.239740 16537 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0906 18:30:19.239775 16537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0906 18:30:19.239804 16537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0906 18:30:19.239987 16537 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.154.0.4
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ubuntu-20-agent-9"
  kubeletExtraArgs:
    node-ip: 10.154.0.4
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
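The generated kubeadm.yaml above is a single file holding four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check is to list the `kind:` of each document; the miniature stand-in file below is an assumption for illustration, while on the node the real path is /var/tmp/minikube/kubeadm.yaml as shown later in the log:

```shell
# Stand-in for the four-document kubeadm.yaml above (abbreviated to the
# apiVersion/kind headers only); counts the bundled documents.
cat > kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' kubeadm-demo.yaml
```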
I0906 18:30:19.240071 16537 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
I0906 18:30:19.248453 16537 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
I0906 18:30:19.248509 16537 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
I0906 18:30:19.256879 16537 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
I0906 18:30:19.256889 16537 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
I0906 18:30:19.256889 16537 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
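The `?checksum=file:...sha256` suffix on the download URLs above tells minikube to verify each binary against the digest file that dl.k8s.io publishes next to it. A minimal sketch of that verification, run offline on a stand-in file (the real URLs and paths are the ones in the log):

```shell
# Sketch: verify a downloaded binary against its published .sha256 file.
# Stand-in file contents; in practice both files come from dl.k8s.io.
set -eu
printf 'stand-in binary contents' > kubelet
sha256sum kubelet | awk '{print $1}' > kubelet.sha256  # upstream publishes this digest file
# The .sha256 file holds only the hex digest, so pair it with the filename:
echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check -
```

A mismatched digest makes `sha256sum --check` exit non-zero, which is the signal to discard the download.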
I0906 18:30:19.256933 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
I0906 18:30:19.256934 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
I0906 18:30:19.256941 16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0906 18:30:19.269236 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
I0906 18:30:19.305762 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3584136102 /var/lib/minikube/binaries/v1.31.0/kubeadm
I0906 18:30:19.320789 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1871334178 /var/lib/minikube/binaries/v1.31.0/kubectl
I0906 18:30:19.349277 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube899051023 /var/lib/minikube/binaries/v1.31.0/kubelet
I0906 18:30:19.413144 16537 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0906 18:30:19.421525 16537 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0906 18:30:19.421549 16537 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0906 18:30:19.421583 16537 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0906 18:30:19.429379 16537 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I0906 18:30:19.429515 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4233444896 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0906 18:30:19.438680 16537 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0906 18:30:19.438703 16537 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0906 18:30:19.438735 16537 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0906 18:30:19.446468 16537 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0906 18:30:19.446596 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1209045392 /lib/systemd/system/kubelet.service
I0906 18:30:19.454832 16537 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
I0906 18:30:19.454951 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2862409953 /var/tmp/minikube/kubeadm.yaml.new
I0906 18:30:19.465359 16537 exec_runner.go:51] Run: grep 10.154.0.4 control-plane.minikube.internal$ /etc/hosts
I0906 18:30:19.466601 16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0906 18:30:19.684681 16537 exec_runner.go:51] Run: sudo systemctl start kubelet
I0906 18:30:19.699588 16537 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube for IP: 10.154.0.4
I0906 18:30:19.699610 16537 certs.go:194] generating shared ca certs ...
I0906 18:30:19.699632 16537 certs.go:226] acquiring lock for ca certs: {Name:mk556d199463ef19f85b68a44414038534ced562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:19.699774 16537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-5859/.minikube/ca.key
I0906 18:30:19.699817 16537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-5859/.minikube/proxy-client-ca.key
I0906 18:30:19.699827 16537 certs.go:256] generating profile certs ...
I0906 18:30:19.699878 16537 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.key
I0906 18:30:19.699891 16537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.crt with IP's: []
I0906 18:30:19.797572 16537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.crt ...
I0906 18:30:19.797601 16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.crt: {Name:mkbea853624ff1b416b05c7051e56df9f1e6e48a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:19.797739 16537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.key ...
I0906 18:30:19.797752 16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/client.key: {Name:mk25f9732ca0079107ee76d2f71d190e1415951e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:19.797822 16537 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key.1b9420d6
I0906 18:30:19.797837 16537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
I0906 18:30:19.862150 16537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
I0906 18:30:19.862181 16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mkdcf29800e713f1ce977e508e4cdbc700b2cb92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:19.862317 16537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
I0906 18:30:19.862327 16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mk735b881f868df870d87d4161b62ab9b9a083d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:19.862378 16537 certs.go:381] copying /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt
I0906 18:30:19.862446 16537 certs.go:385] copying /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key
I0906 18:30:19.862498 16537 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.key
I0906 18:30:19.862511 16537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0906 18:30:20.005278 16537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.crt ...
I0906 18:30:20.005308 16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.crt: {Name:mke67782129f4169dc8d462ad2039289f538d003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:20.005461 16537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.key ...
I0906 18:30:20.005472 16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.key: {Name:mk3b0da6b806d9ce972a7bd1c9549e30eb90e836 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:20.005627 16537 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-5859/.minikube/certs/ca-key.pem (1675 bytes)
I0906 18:30:20.005660 16537 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-5859/.minikube/certs/ca.pem (1082 bytes)
I0906 18:30:20.005683 16537 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-5859/.minikube/certs/cert.pem (1123 bytes)
I0906 18:30:20.005703 16537 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-5859/.minikube/certs/key.pem (1679 bytes)
I0906 18:30:20.006270 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0906 18:30:20.006379 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2080329162 /var/lib/minikube/certs/ca.crt
I0906 18:30:20.015530 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0906 18:30:20.015709 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1609637301 /var/lib/minikube/certs/ca.key
I0906 18:30:20.024112 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0906 18:30:20.024233 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3717130230 /var/lib/minikube/certs/proxy-client-ca.crt
I0906 18:30:20.033184 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0906 18:30:20.033318 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3095673967 /var/lib/minikube/certs/proxy-client-ca.key
I0906 18:30:20.041781 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0906 18:30:20.041890 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube283663809 /var/lib/minikube/certs/apiserver.crt
I0906 18:30:20.049480 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0906 18:30:20.049606 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2699080327 /var/lib/minikube/certs/apiserver.key
I0906 18:30:20.057407 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0906 18:30:20.057513 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2065321659 /var/lib/minikube/certs/proxy-client.crt
I0906 18:30:20.065215 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0906 18:30:20.065329 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1342280924 /var/lib/minikube/certs/proxy-client.key
I0906 18:30:20.073367 16537 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0906 18:30:20.073385 16537 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:20.073416 16537 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:20.081598 16537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-5859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0906 18:30:20.081729 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube562049854 /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:20.089336 16537 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0906 18:30:20.089439 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1205555838 /var/lib/minikube/kubeconfig
I0906 18:30:20.097399 16537 exec_runner.go:51] Run: openssl version
I0906 18:30:20.100224 16537 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0906 18:30:20.109385 16537 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:20.110803 16537 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 6 18:30 /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:20.110857 16537 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0906 18:30:20.113716 16537 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
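The symlink name b5213941.0 above is not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by the subject-name hash printed by `openssl x509 -hash`, which is exactly what the preceding `openssl x509 -hash -noout -in ...minikubeCA.pem` run computed. A sketch with a throwaway self-signed certificate (the minikubeCA paths in the log are the real ones):

```shell
# Generate a throwaway CA, derive its subject-name hash, and create the
# <hash>.0 symlink the way the log does for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key -out demo-ca.pem \
  -subj "/CN=demoCA" -days 1 2>/dev/null
HASH=$(openssl x509 -hash -noout -in demo-ca.pem)
ln -fs demo-ca.pem "${HASH}.0"
echo "$HASH"
```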
I0906 18:30:20.121732 16537 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0906 18:30:20.122913 16537 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0906 18:30:20.123025 16537 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0906 18:30:20.123141 16537 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0906 18:30:20.140016 16537 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0906 18:30:20.148566 16537 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0906 18:30:20.156784 16537 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0906 18:30:20.178343 16537 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0906 18:30:20.187031 16537 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0906 18:30:20.187049 16537 kubeadm.go:157] found existing configuration files:
I0906 18:30:20.187091 16537 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0906 18:30:20.195222 16537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0906 18:30:20.195276 16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0906 18:30:20.203103 16537 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0906 18:30:20.210696 16537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0906 18:30:20.210737 16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0906 18:30:20.217719 16537 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0906 18:30:20.226032 16537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0906 18:30:20.226093 16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0906 18:30:20.234617 16537 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0906 18:30:20.242464 16537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0906 18:30:20.242521 16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0906 18:30:20.250317 16537 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0906 18:30:20.283837 16537 kubeadm.go:310] W0906 18:30:20.283731 17412 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0906 18:30:20.284372 16537 kubeadm.go:310] W0906 18:30:20.284328 17412 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0906 18:30:20.286218 16537 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
I0906 18:30:20.286242 16537 kubeadm.go:310] [preflight] Running pre-flight checks
I0906 18:30:20.383845 16537 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0906 18:30:20.383982 16537 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0906 18:30:20.383995 16537 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0906 18:30:20.384001 16537 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0906 18:30:20.396335 16537 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0906 18:30:20.400376 16537 out.go:235] - Generating certificates and keys ...
I0906 18:30:20.400441 16537 kubeadm.go:310] [certs] Using existing ca certificate authority
I0906 18:30:20.400462 16537 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0906 18:30:20.748846 16537 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0906 18:30:20.882128 16537 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0906 18:30:21.048605 16537 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0906 18:30:21.137813 16537 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0906 18:30:21.252667 16537 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0906 18:30:21.252751 16537 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I0906 18:30:21.304998 16537 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0906 18:30:21.305058 16537 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I0906 18:30:21.583104 16537 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0906 18:30:21.674513 16537 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0906 18:30:21.758487 16537 kubeadm.go:310] [certs] Generating "sa" key and public key
I0906 18:30:21.758656 16537 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0906 18:30:21.836872 16537 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0906 18:30:22.122091 16537 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0906 18:30:22.195165 16537 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0906 18:30:22.471931 16537 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0906 18:30:22.603256 16537 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0906 18:30:22.603841 16537 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0906 18:30:22.606236 16537 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0906 18:30:22.608589 16537 out.go:235] - Booting up control plane ...
I0906 18:30:22.608630 16537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0906 18:30:22.608657 16537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0906 18:30:22.610022 16537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0906 18:30:22.631175 16537 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0906 18:30:22.635936 16537 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0906 18:30:22.635980 16537 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0906 18:30:22.890998 16537 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0906 18:30:22.891024 16537 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0906 18:30:23.392800 16537 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.742085ms
I0906 18:30:23.392822 16537 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0906 18:30:27.894298 16537 kubeadm.go:310] [api-check] The API server is healthy after 4.501453314s
I0906 18:30:27.904807 16537 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0906 18:30:27.915389 16537 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0906 18:30:27.932367 16537 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0906 18:30:27.932387 16537 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0906 18:30:27.939160 16537 kubeadm.go:310] [bootstrap-token] Using token: b9dczz.lnjn4ynjbdzm4imt
I0906 18:30:27.940616 16537 out.go:235] - Configuring RBAC rules ...
I0906 18:30:27.940648 16537 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0906 18:30:27.946018 16537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0906 18:30:27.952061 16537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0906 18:30:27.954267 16537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0906 18:30:27.957389 16537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0906 18:30:27.959683 16537 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0906 18:30:28.300445 16537 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0906 18:30:28.719481 16537 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0906 18:30:29.299366 16537 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0906 18:30:29.300174 16537 kubeadm.go:310]
I0906 18:30:29.300189 16537 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0906 18:30:29.300194 16537 kubeadm.go:310]
I0906 18:30:29.300198 16537 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0906 18:30:29.300202 16537 kubeadm.go:310]
I0906 18:30:29.300205 16537 kubeadm.go:310] mkdir -p $HOME/.kube
I0906 18:30:29.300210 16537 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0906 18:30:29.300213 16537 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0906 18:30:29.300217 16537 kubeadm.go:310]
I0906 18:30:29.300220 16537 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0906 18:30:29.300224 16537 kubeadm.go:310]
I0906 18:30:29.300227 16537 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0906 18:30:29.300230 16537 kubeadm.go:310]
I0906 18:30:29.300234 16537 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0906 18:30:29.300238 16537 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0906 18:30:29.300242 16537 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0906 18:30:29.300245 16537 kubeadm.go:310]
I0906 18:30:29.300250 16537 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0906 18:30:29.300254 16537 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0906 18:30:29.300258 16537 kubeadm.go:310]
I0906 18:30:29.300274 16537 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b9dczz.lnjn4ynjbdzm4imt \
I0906 18:30:29.300280 16537 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:556ca23e9a7c272d5a050de938211552af48eef0910c4191e38aa373095b5858 \
I0906 18:30:29.300284 16537 kubeadm.go:310] --control-plane
I0906 18:30:29.300288 16537 kubeadm.go:310]
I0906 18:30:29.300292 16537 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0906 18:30:29.300296 16537 kubeadm.go:310]
I0906 18:30:29.300300 16537 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b9dczz.lnjn4ynjbdzm4imt \
I0906 18:30:29.300304 16537 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:556ca23e9a7c272d5a050de938211552af48eef0910c4191e38aa373095b5858
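The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is, per the kubeadm documentation, the SHA-256 digest of the cluster CA's DER-encoded public key. A sketch of the derivation using a throwaway CA rather than the cluster's real /var/lib/minikube/certs/ca.crt:

```shell
# Generate a throwaway CA and compute the kubeadm discovery hash:
# SHA-256 over the DER-encoded public key extracted from the CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout join-ca.key -out join-ca.crt \
  -subj "/CN=demoCA" -days 1 2>/dev/null
openssl x509 -pubkey -in join-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

Joining nodes recompute this hash from the CA cert served in the cluster-info ConfigMap and compare it to the flag value, which is what lets the token-based join bootstrap trust safely.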
I0906 18:30:29.303239 16537 cni.go:84] Creating CNI manager for ""
I0906 18:30:29.303269 16537 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0906 18:30:29.305166 16537 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0906 18:30:29.306356 16537 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0906 18:30:29.317565 16537 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0906 18:30:29.317745 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1574200271 /etc/cni/net.d/1-k8s.conflist
I0906 18:30:29.327842 16537 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0906 18:30:29.327902 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:29.327955 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_09_06T18_30_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0906 18:30:29.336433 16537 ops.go:34] apiserver oom_adj: -16
I0906 18:30:29.399003 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:29.899646 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:30.399663 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:30.899995 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:31.399976 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:31.899941 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:32.400007 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:32.899483 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:33.399432 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:33.900003 16537 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 18:30:33.970200 16537 kubeadm.go:1113] duration metric: took 4.642345841s to wait for elevateKubeSystemPrivileges
I0906 18:30:33.970237 16537 kubeadm.go:394] duration metric: took 13.847284956s to StartCluster
I0906 18:30:33.970259 16537 settings.go:142] acquiring lock: {Name:mk11025cc1d7eecccc04d83d4495919089f6c151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:33.970324 16537 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19576-5859/kubeconfig
I0906 18:30:33.970963 16537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-5859/kubeconfig: {Name:mke818bbf85a8934c960f593fcb655d7c5e2890c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 18:30:33.971190 16537 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0906 18:30:33.971289 16537 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0906 18:30:33.971380 16537 addons.go:69] Setting yakd=true in profile "minikube"
I0906 18:30:33.971397 16537 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0906 18:30:33.971409 16537 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0906 18:30:33.971432 16537 addons.go:69] Setting registry=true in profile "minikube"
I0906 18:30:33.971434 16537 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0906 18:30:33.971452 16537 addons.go:234] Setting addon registry=true in "minikube"
I0906 18:30:33.971452 16537 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0906 18:30:33.971465 16537 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0906 18:30:33.971473 16537 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0906 18:30:33.971480 16537 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:30:33.971488 16537 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0906 18:30:33.971490 16537 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0906 18:30:33.971497 16537 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0906 18:30:33.971508 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.971517 16537 addons.go:69] Setting metrics-server=true in profile "minikube"
I0906 18:30:33.971520 16537 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0906 18:30:33.971524 16537 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0906 18:30:33.971538 16537 addons.go:234] Setting addon metrics-server=true in "minikube"
I0906 18:30:33.971542 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.971554 16537 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0906 18:30:33.971569 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.971608 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.971423 16537 addons.go:234] Setting addon yakd=true in "minikube"
I0906 18:30:33.972145 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.971498 16537 addons.go:69] Setting helm-tiller=true in profile "minikube"
I0906 18:30:33.972237 16537 addons.go:234] Setting addon helm-tiller=true in "minikube"
I0906 18:30:33.972268 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.972824 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:33.972827 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:33.972843 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:33.972843 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:33.972849 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:33.972865 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:33.972879 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:33.971452 16537 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0906 18:30:33.972904 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:33.972982 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.973084 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:33.973098 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:33.973105 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:33.973119 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:33.973130 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:33.973146 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:33.971388 16537 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0906 18:30:33.973640 16537 out.go:177] * Configuring local host environment ...
I0906 18:30:33.973904 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:33.973919 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:33.973960 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:33.971482 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.974654 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:33.974675 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:33.974723 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:33.971485 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:33.975166 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:33.975195 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:33.975228 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0906 18:30:33.975290 16537 out.go:270] *
W0906 18:30:33.975306 16537 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0906 18:30:33.975318 16537 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0906 18:30:33.975331 16537 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0906 18:30:33.975339 16537 out.go:270] *
W0906 18:30:33.975402 16537 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0906 18:30:33.975411 16537 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
I0906 18:30:33.973647 16537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0906 18:30:33.971506 16537 addons.go:69] Setting gcp-auth=true in profile "minikube"
W0906 18:30:33.975795 16537 out.go:270] *
I0906 18:30:33.972879 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:33.971508 16537 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0906 18:30:33.971486 16537 addons.go:69] Setting volcano=true in profile "minikube"
W0906 18:30:33.976585 16537 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0906 18:30:33.976600 16537 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
I0906 18:30:33.991096 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:33.996743 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.001773 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.001889 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.003288 16537 mustload.go:65] Loading cluster: minikube
I0906 18:30:34.003341 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.003562 16537 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:30:34.004141 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:34.004157 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:34.004190 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:34.004391 16537 addons.go:234] Setting addon volcano=true in "minikube"
I0906 18:30:34.004456 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:34.005135 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:34.005150 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:34.005184 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:34.005504 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.005684 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:34.006368 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:34.006381 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:34.006406 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:34.006984 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:34.007004 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:34.007033 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:34.010419 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.010471 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
W0906 18:30:34.015570 16537 out.go:270] *
W0906 18:30:34.015607 16537 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0906 18:30:34.015651 16537 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0906 18:30:34.016822 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:34.016859 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:34.016897 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:34.017262 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.018115 16537 out.go:177] * Verifying Kubernetes components...
I0906 18:30:34.018267 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.018291 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.018991 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:34.019007 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:34.019040 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:34.019345 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.020062 16537 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0906 18:30:34.024587 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.025798 16537 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0906 18:30:34.026106 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.026894 16537 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0906 18:30:34.026932 16537 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0906 18:30:34.027071 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2084483678 /etc/kubernetes/addons/metrics-apiservice.yaml
I0906 18:30:34.028561 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.028606 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.031357 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.031378 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.031405 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.033623 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.034959 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.035027 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.041232 16537 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0906 18:30:34.042890 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.044806 16537 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0906 18:30:34.044865 16537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0906 18:30:34.044895 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0906 18:30:34.044898 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0906 18:30:34.045013 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2243718550 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0906 18:30:34.045106 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2332487414 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0906 18:30:34.048752 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.048796 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.048809 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.053140 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.053204 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.054215 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.055696 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.058117 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.058141 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.063569 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.065090 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.065115 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.065533 16537 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0906 18:30:34.067049 16537 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0906 18:30:34.067088 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0906 18:30:34.067235 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3876569157 /etc/kubernetes/addons/helm-tiller-dp.yaml
I0906 18:30:34.070206 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.071314 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.071535 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.072067 16537 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0906 18:30:34.073425 16537 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0906 18:30:34.074427 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.074449 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.076177 16537 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0906 18:30:34.076328 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1849676009 /etc/kubernetes/addons/yakd-ns.yaml
I0906 18:30:34.076969 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.077023 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.080331 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.080658 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.080664 16537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0906 18:30:34.080713 16537 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0906 18:30:34.080825 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube154303027 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0906 18:30:34.080876 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0906 18:30:34.082843 16537 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0906 18:30:34.084061 16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0906 18:30:34.084087 16537 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0906 18:30:34.084092 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.084138 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.084245 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1049986199 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0906 18:30:34.086659 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.086701 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.090381 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.090435 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.092523 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.093613 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.093662 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.096440 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.096821 16537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0906 18:30:34.096855 16537 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0906 18:30:34.096999 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2006494449 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0906 18:30:34.098538 16537 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
I0906 18:30:34.099965 16537 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0906 18:30:34.099998 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0906 18:30:34.100127 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube695944334 /etc/kubernetes/addons/deployment.yaml
I0906 18:30:34.104933 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.104978 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.107507 16537 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0906 18:30:34.109388 16537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0906 18:30:34.109415 16537 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0906 18:30:34.109540 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3059371932 /etc/kubernetes/addons/metrics-server-service.yaml
I0906 18:30:34.109938 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.109959 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.111528 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.111551 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.114904 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.115890 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0906 18:30:34.116337 16537 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0906 18:30:34.116371 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:34.116866 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:34.116883 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:34.116909 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:34.117113 16537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0906 18:30:34.117139 16537 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0906 18:30:34.117280 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1917499766 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0906 18:30:34.117484 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.117498 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.117612 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.117633 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.118565 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.120281 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.120470 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.120536 16537 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0906 18:30:34.120574 16537 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0906 18:30:34.120704 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3675847591 /etc/kubernetes/addons/yakd-sa.yaml
I0906 18:30:34.120996 16537 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0906 18:30:34.121923 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.122034 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.122505 16537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0906 18:30:34.122524 16537 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0906 18:30:34.122532 16537 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0906 18:30:34.122567 16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0906 18:30:34.124535 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.124687 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.124701 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:34.126351 16537 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0906 18:30:34.127335 16537 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0906 18:30:34.127370 16537 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0906 18:30:34.127487 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4117090496 /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0906 18:30:34.129761 16537 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0906 18:30:34.131163 16537 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0906 18:30:34.132145 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0906 18:30:34.133851 16537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0906 18:30:34.133873 16537 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0906 18:30:34.133960 16537 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0906 18:30:34.133986 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2090160348 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0906 18:30:34.136921 16537 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0906 18:30:34.138165 16537 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0906 18:30:34.139949 16537 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0906 18:30:34.141511 16537 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0906 18:30:34.142798 16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0906 18:30:34.142851 16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0906 18:30:34.143125 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube750790998 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0906 18:30:34.148310 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.148355 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.153263 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.155043 16537 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0906 18:30:34.156631 16537 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0906 18:30:34.156674 16537 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0906 18:30:34.156938 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube771928026 /etc/kubernetes/addons/ig-namespace.yaml
I0906 18:30:34.161031 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.161051 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.161368 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.161390 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.162530 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0906 18:30:34.162661 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2304560589 /etc/kubernetes/addons/storage-provisioner.yaml
I0906 18:30:34.166193 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.168192 16537 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0906 18:30:34.168235 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:34.168494 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.168946 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:34.168964 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:34.169002 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:34.170819 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.170841 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.171594 16537 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
I0906 18:30:34.174863 16537 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
I0906 18:30:34.176305 16537 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0906 18:30:34.176336 16537 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0906 18:30:34.176817 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3049052891 /etc/kubernetes/addons/yakd-crb.yaml
I0906 18:30:34.177852 16537 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
I0906 18:30:34.178736 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.179666 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.181290 16537 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0906 18:30:34.181574 16537 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0906 18:30:34.181599 16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0906 18:30:34.181800 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1375878742 /etc/kubernetes/addons/rbac-hostpath.yaml
I0906 18:30:34.182460 16537 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0906 18:30:34.182502 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
I0906 18:30:34.183260 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube243895838 /etc/kubernetes/addons/volcano-deployment.yaml
I0906 18:30:34.184409 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0906 18:30:34.186420 16537 out.go:177] - Using image docker.io/registry:2.8.3
I0906 18:30:34.187412 16537 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0906 18:30:34.187439 16537 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0906 18:30:34.187563 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1156456590 /etc/kubernetes/addons/helm-tiller-svc.yaml
I0906 18:30:34.187930 16537 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0906 18:30:34.187957 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0906 18:30:34.188079 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2844190885 /etc/kubernetes/addons/registry-rc.yaml
I0906 18:30:34.202427 16537 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0906 18:30:34.202470 16537 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0906 18:30:34.202743 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2716063426 /etc/kubernetes/addons/yakd-svc.yaml
I0906 18:30:34.204633 16537 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0906 18:30:34.204668 16537 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0906 18:30:34.204791 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube572377492 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0906 18:30:34.210421 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0906 18:30:34.214209 16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0906 18:30:34.214266 16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0906 18:30:34.214476 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3837124974 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0906 18:30:34.229312 16537 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0906 18:30:34.229347 16537 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0906 18:30:34.229467 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1617842210 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0906 18:30:34.238269 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0906 18:30:34.239584 16537 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0906 18:30:34.239613 16537 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0906 18:30:34.239728 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube947277937 /etc/kubernetes/addons/registry-svc.yaml
I0906 18:30:34.240210 16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0906 18:30:34.240239 16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0906 18:30:34.240344 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1650829709 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0906 18:30:34.244928 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.245072 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.258680 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.258713 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.261269 16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0906 18:30:34.261300 16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0906 18:30:34.261419 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2238946092 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0906 18:30:34.264067 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.264116 16537 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0906 18:30:34.264129 16537 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0906 18:30:34.264141 16537 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0906 18:30:34.264182 16537 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0906 18:30:34.264487 16537 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0906 18:30:34.264512 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0906 18:30:34.264626 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube183100232 /etc/kubernetes/addons/yakd-dp.yaml
I0906 18:30:34.280914 16537 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0906 18:30:34.280949 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0906 18:30:34.281080 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3416468484 /etc/kubernetes/addons/registry-proxy.yaml
I0906 18:30:34.282449 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0906 18:30:34.283747 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:34.284235 16537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0906 18:30:34.284264 16537 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0906 18:30:34.284391 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube723907704 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0906 18:30:34.292370 16537 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0906 18:30:34.292403 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0906 18:30:34.292531 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3737032807 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0906 18:30:34.302679 16537 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0906 18:30:34.302866 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1539627577 /etc/kubernetes/addons/storageclass.yaml
I0906 18:30:34.311269 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0906 18:30:34.311638 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:34.311721 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:34.316939 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0906 18:30:34.329889 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0906 18:30:34.338999 16537 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0906 18:30:34.339035 16537 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0906 18:30:34.339165 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2197815068 /etc/kubernetes/addons/ig-role.yaml
I0906 18:30:34.364285 16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0906 18:30:34.364321 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0906 18:30:34.364464 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube867340576 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0906 18:30:34.375484 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:34.375513 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:34.387529 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:34.391091 16537 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0906 18:30:34.392709 16537 out.go:177] - Using image docker.io/busybox:stable
I0906 18:30:34.394657 16537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0906 18:30:34.394745 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0906 18:30:34.394915 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2454301497 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0906 18:30:34.414882 16537 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0906 18:30:34.414924 16537 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0906 18:30:34.415046 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1184914514 /etc/kubernetes/addons/ig-rolebinding.yaml
I0906 18:30:34.443978 16537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0906 18:30:34.444018 16537 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0906 18:30:34.444160 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4226892386 /etc/kubernetes/addons/ig-clusterrole.yaml
I0906 18:30:34.476443 16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0906 18:30:34.476482 16537 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0906 18:30:34.476604 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2960343500 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0906 18:30:34.493627 16537 exec_runner.go:51] Run: sudo systemctl start kubelet
I0906 18:30:34.499251 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0906 18:30:34.566905 16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0906 18:30:34.566948 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0906 18:30:34.567436 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1995683430 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0906 18:30:34.575793 16537 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
I0906 18:30:34.579172 16537 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
I0906 18:30:34.579193 16537 node_ready.go:38] duration metric: took 3.370369ms for node "ubuntu-20-agent-9" to be "Ready" ...
I0906 18:30:34.579205 16537 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0906 18:30:34.596160 16537 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace to be "Ready" ...
I0906 18:30:34.637283 16537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0906 18:30:34.637320 16537 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0906 18:30:34.637463 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1131400545 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0906 18:30:34.711621 16537 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0906 18:30:34.726204 16537 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0906 18:30:34.726250 16537 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0906 18:30:34.727197 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2396398083 /etc/kubernetes/addons/ig-crd.yaml
I0906 18:30:34.833711 16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0906 18:30:34.833755 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0906 18:30:34.833925 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1621882382 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0906 18:30:34.907141 16537 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0906 18:30:34.907181 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0906 18:30:34.907385 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4073258762 /etc/kubernetes/addons/ig-daemonset.yaml
I0906 18:30:34.946319 16537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0906 18:30:34.946362 16537 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0906 18:30:34.946515 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2718334830 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0906 18:30:34.989043 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0906 18:30:34.999900 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0906 18:30:35.226872 16537 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0906 18:30:35.261637 16537 addons.go:475] Verifying addon registry=true in "minikube"
I0906 18:30:35.266072 16537 out.go:177] * Verifying registry addon...
I0906 18:30:35.277165 16537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0906 18:30:35.279185 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.040871516s)
I0906 18:30:35.281905 16537 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0906 18:30:35.281934 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:35.299008 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.166818137s)
I0906 18:30:35.299044 16537 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0906 18:30:35.363437 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.178980842s)
I0906 18:30:35.580204 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.080913754s)
I0906 18:30:35.748641 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.466145892s)
I0906 18:30:35.757870 16537 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0906 18:30:35.781417 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:36.210568 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.89913039s)
W0906 18:30:36.210615 16537 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0906 18:30:36.210653 16537 retry.go:31] will retry after 191.026519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0906 18:30:36.285705 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:36.401892 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0906 18:30:36.608601 16537 pod_ready.go:103] pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:36.781517 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:37.284671 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:37.418270 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.207813152s)
I0906 18:30:37.623011 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.623046717s)
I0906 18:30:37.623050 16537 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0906 18:30:37.624618 16537 out.go:177] * Verifying csi-hostpath-driver addon...
I0906 18:30:37.627325 16537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0906 18:30:37.633668 16537 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0906 18:30:37.633696 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:37.781471 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:38.133361 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:38.281490 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:38.631347 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:38.781233 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:39.101212 16537 pod_ready.go:103] pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:39.132352 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:39.198479 16537 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.796527775s)
I0906 18:30:39.280505 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:39.632573 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:39.781982 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:40.131808 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:40.281000 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:40.632869 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:40.781411 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:41.102458 16537 pod_ready.go:103] pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status "Ready":"False"
I0906 18:30:41.131674 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:41.132891 16537 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0906 18:30:41.133039 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4117297751 /var/lib/minikube/google_application_credentials.json
I0906 18:30:41.145610 16537 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0906 18:30:41.145750 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3605993201 /var/lib/minikube/google_cloud_project
I0906 18:30:41.159210 16537 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0906 18:30:41.159266 16537 host.go:66] Checking if "minikube" exists ...
I0906 18:30:41.159965 16537 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0906 18:30:41.159987 16537 api_server.go:166] Checking apiserver status ...
I0906 18:30:41.160026 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:41.180201 16537 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17787/cgroup
I0906 18:30:41.196342 16537 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e"
I0906 18:30:41.196420 16537 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2d363300503931599c821c09469c1d44/a2029506d4d12d9233321b0fccee7f5514a6ccb913e3e589a5efd505b6a73e0e/freezer.state
I0906 18:30:41.207939 16537 api_server.go:204] freezer state: "THAWED"
I0906 18:30:41.207969 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:41.212142 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:41.212210 16537 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0906 18:30:41.215883 16537 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0906 18:30:41.217306 16537 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0906 18:30:41.218611 16537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0906 18:30:41.218653 16537 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0906 18:30:41.218877 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1972949581 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0906 18:30:41.232029 16537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0906 18:30:41.232150 16537 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0906 18:30:41.232286 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2754409336 /etc/kubernetes/addons/gcp-auth-service.yaml
I0906 18:30:41.244167 16537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0906 18:30:41.244194 16537 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0906 18:30:41.244302 16537 exec_runner.go:51] Run: sudo cp -a /tmp/minikube658414868 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0906 18:30:41.255928 16537 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0906 18:30:41.282052 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:41.632571 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:41.667180 16537 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0906 18:30:41.668704 16537 out.go:177] * Verifying gcp-auth addon...
I0906 18:30:41.671280 16537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0906 18:30:41.731286 16537 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0906 18:30:41.831935 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:42.132189 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:42.280601 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:42.631681 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:42.817481 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:43.103026 16537 pod_ready.go:98] pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:42 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.154.0.4 HostIPs:[{IP:10.154.0.4}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-06 18:30:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:35 +0000 UTC,FinishedAt:2024-09-06 18:30:41 +0000 UTC,ContainerID:docker://154a3b41d5fef27c2f10513a04bdb050783f3066027f560ca4f4f029fc6f2ad8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://154a3b41d5fef27c2f10513a04bdb050783f3066027f560ca4f4f029fc6f2ad8 Started:0xc00136f0f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001734400} {Name:kube-api-access-pphq5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001734410}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0906 18:30:43.103054 16537 pod_ready.go:82] duration metric: took 8.506813205s for pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace to be "Ready" ...
E0906 18:30:43.103067 16537 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-mmqcm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:42 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.154.0.4 HostIPs:[{IP:10.154.0.4}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-06 18:30:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:35 +0000 UTC,FinishedAt:2024-09-06 18:30:41 +0000 UTC,ContainerID:docker://154a3b41d5fef27c2f10513a04bdb050783f3066027f560ca4f4f029fc6f2ad8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://154a3b41d5fef27c2f10513a04bdb050783f3066027f560ca4f4f029fc6f2ad8 Started:0xc00136f0f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001734400} {Name:kube-api-access-pphq5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001734410}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I0906 18:30:43.103080 16537 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qszb4" in "kube-system" namespace to be "Ready" ...
I0906 18:30:43.132118 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:43.280621 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:43.632947 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:43.793473 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:44.108529 16537 pod_ready.go:93] pod "coredns-6f6b679f8f-qszb4" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:44.108551 16537 pod_ready.go:82] duration metric: took 1.005461036s for pod "coredns-6f6b679f8f-qszb4" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.108560 16537 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.112057 16537 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:44.112074 16537 pod_ready.go:82] duration metric: took 3.508277ms for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.112083 16537 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.115629 16537 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:44.115645 16537 pod_ready.go:82] duration metric: took 3.557233ms for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.115655 16537 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.119067 16537 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:44.119085 16537 pod_ready.go:82] duration metric: took 3.422424ms for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.119096 16537 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bpcdx" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.131298 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:44.281681 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:44.300150 16537 pod_ready.go:93] pod "kube-proxy-bpcdx" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:44.300169 16537 pod_ready.go:82] duration metric: took 181.066717ms for pod "kube-proxy-bpcdx" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.300179 16537 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.631894 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:44.700871 16537 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:44.700892 16537 pod_ready.go:82] duration metric: took 400.707515ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.700904 16537 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-992f8" in "kube-system" namespace to be "Ready" ...
I0906 18:30:44.781033 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:45.100064 16537 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-992f8" in "kube-system" namespace has status "Ready":"True"
I0906 18:30:45.100097 16537 pod_ready.go:82] duration metric: took 399.185487ms for pod "nvidia-device-plugin-daemonset-992f8" in "kube-system" namespace to be "Ready" ...
I0906 18:30:45.100110 16537 pod_ready.go:39] duration metric: took 10.520892482s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0906 18:30:45.100135 16537 api_server.go:52] waiting for apiserver process to appear ...
I0906 18:30:45.100201 16537 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 18:30:45.116434 16537 api_server.go:72] duration metric: took 11.100738952s to wait for apiserver process to appear ...
I0906 18:30:45.116467 16537 api_server.go:88] waiting for apiserver healthz status ...
I0906 18:30:45.116489 16537 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0906 18:30:45.119934 16537 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0906 18:30:45.120729 16537 api_server.go:141] control plane version: v1.31.0
I0906 18:30:45.120748 16537 api_server.go:131] duration metric: took 4.274409ms to wait for apiserver health ...
I0906 18:30:45.120756 16537 system_pods.go:43] waiting for kube-system pods to appear ...
I0906 18:30:45.132003 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:45.280778 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:45.304970 16537 system_pods.go:59] 17 kube-system pods found
I0906 18:30:45.304999 16537 system_pods.go:61] "coredns-6f6b679f8f-qszb4" [d2454283-5b35-4985-9435-66f20cce3bef] Running
I0906 18:30:45.305008 16537 system_pods.go:61] "csi-hostpath-attacher-0" [1734d6de-d709-4f90-814d-4ff9e56e98ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0906 18:30:45.305014 16537 system_pods.go:61] "csi-hostpath-resizer-0" [11950ea3-6ebf-45da-977a-e9492c7efbf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0906 18:30:45.305023 16537 system_pods.go:61] "csi-hostpathplugin-vj2d9" [bc23e849-db9a-4bb7-b7ca-ddb2fef143b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0906 18:30:45.305028 16537 system_pods.go:61] "etcd-ubuntu-20-agent-9" [33046595-0146-42d7-bdfa-98907e349e30] Running
I0906 18:30:45.305032 16537 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [259f6f47-ac30-4ced-956e-7b622b01486e] Running
I0906 18:30:45.305037 16537 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [a95cb8b3-5f88-424e-ba5b-aa1515570850] Running
I0906 18:30:45.305040 16537 system_pods.go:61] "kube-proxy-bpcdx" [8e23e962-188d-400f-937e-be6e054a9f3a] Running
I0906 18:30:45.305045 16537 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [2e5e96d0-74b6-4dd5-853e-0a77544068a3] Running
I0906 18:30:45.305051 16537 system_pods.go:61] "metrics-server-84c5f94fbc-9xnx2" [8878e996-0060-4707-ae80-56a0f78c6d2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0906 18:30:45.305055 16537 system_pods.go:61] "nvidia-device-plugin-daemonset-992f8" [8b1041db-3dd1-4e54-b9f6-2006b5cfd9e8] Running
I0906 18:30:45.305060 16537 system_pods.go:61] "registry-6fb4cdfc84-v2hm7" [e6f429ae-9168-48c6-8e02-968ce47780ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0906 18:30:45.305065 16537 system_pods.go:61] "registry-proxy-lp46v" [6d764de2-9241-4f56-9564-ae56133efa57] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0906 18:30:45.305077 16537 system_pods.go:61] "snapshot-controller-56fcc65765-6c22x" [559b5a4e-9454-4406-ab38-ac3d9cf3d36f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0906 18:30:45.305087 16537 system_pods.go:61] "snapshot-controller-56fcc65765-dlght" [9cca90da-19cc-47fb-a0af-622e17b6bf43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0906 18:30:45.305094 16537 system_pods.go:61] "storage-provisioner" [1a63b402-53de-4e52-8e77-0230c672502a] Running
I0906 18:30:45.305102 16537 system_pods.go:61] "tiller-deploy-b48cc5f79-qjchh" [683f5399-2294-48a0-b4b9-f13807b03c3a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0906 18:30:45.305111 16537 system_pods.go:74] duration metric: took 184.34887ms to wait for pod list to return data ...
I0906 18:30:45.305125 16537 default_sa.go:34] waiting for default service account to be created ...
I0906 18:30:45.500311 16537 default_sa.go:45] found service account: "default"
I0906 18:30:45.500337 16537 default_sa.go:55] duration metric: took 195.205319ms for default service account to be created ...
I0906 18:30:45.500349 16537 system_pods.go:116] waiting for k8s-apps to be running ...
I0906 18:30:45.632739 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:45.705860 16537 system_pods.go:86] 17 kube-system pods found
I0906 18:30:45.705967 16537 system_pods.go:89] "coredns-6f6b679f8f-qszb4" [d2454283-5b35-4985-9435-66f20cce3bef] Running
I0906 18:30:45.705992 16537 system_pods.go:89] "csi-hostpath-attacher-0" [1734d6de-d709-4f90-814d-4ff9e56e98ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0906 18:30:45.705999 16537 system_pods.go:89] "csi-hostpath-resizer-0" [11950ea3-6ebf-45da-977a-e9492c7efbf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0906 18:30:45.706006 16537 system_pods.go:89] "csi-hostpathplugin-vj2d9" [bc23e849-db9a-4bb7-b7ca-ddb2fef143b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0906 18:30:45.706020 16537 system_pods.go:89] "etcd-ubuntu-20-agent-9" [33046595-0146-42d7-bdfa-98907e349e30] Running
I0906 18:30:45.706026 16537 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [259f6f47-ac30-4ced-956e-7b622b01486e] Running
I0906 18:30:45.706032 16537 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [a95cb8b3-5f88-424e-ba5b-aa1515570850] Running
I0906 18:30:45.706036 16537 system_pods.go:89] "kube-proxy-bpcdx" [8e23e962-188d-400f-937e-be6e054a9f3a] Running
I0906 18:30:45.706040 16537 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [2e5e96d0-74b6-4dd5-853e-0a77544068a3] Running
I0906 18:30:45.706045 16537 system_pods.go:89] "metrics-server-84c5f94fbc-9xnx2" [8878e996-0060-4707-ae80-56a0f78c6d2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0906 18:30:45.706049 16537 system_pods.go:89] "nvidia-device-plugin-daemonset-992f8" [8b1041db-3dd1-4e54-b9f6-2006b5cfd9e8] Running
I0906 18:30:45.706056 16537 system_pods.go:89] "registry-6fb4cdfc84-v2hm7" [e6f429ae-9168-48c6-8e02-968ce47780ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0906 18:30:45.706062 16537 system_pods.go:89] "registry-proxy-lp46v" [6d764de2-9241-4f56-9564-ae56133efa57] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0906 18:30:45.706075 16537 system_pods.go:89] "snapshot-controller-56fcc65765-6c22x" [559b5a4e-9454-4406-ab38-ac3d9cf3d36f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0906 18:30:45.706081 16537 system_pods.go:89] "snapshot-controller-56fcc65765-dlght" [9cca90da-19cc-47fb-a0af-622e17b6bf43] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0906 18:30:45.706097 16537 system_pods.go:89] "storage-provisioner" [1a63b402-53de-4e52-8e77-0230c672502a] Running
I0906 18:30:45.706113 16537 system_pods.go:89] "tiller-deploy-b48cc5f79-qjchh" [683f5399-2294-48a0-b4b9-f13807b03c3a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0906 18:30:45.706123 16537 system_pods.go:126] duration metric: took 205.768721ms to wait for k8s-apps to be running ...
I0906 18:30:45.706133 16537 system_svc.go:44] waiting for kubelet service to be running ....
I0906 18:30:45.706173 16537 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0906 18:30:45.720877 16537 system_svc.go:56] duration metric: took 14.733045ms WaitForService to wait for kubelet
I0906 18:30:45.720907 16537 kubeadm.go:582] duration metric: took 11.70521539s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0906 18:30:45.720928 16537 node_conditions.go:102] verifying NodePressure condition ...
I0906 18:30:45.780728 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:45.900159 16537 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0906 18:30:45.900184 16537 node_conditions.go:123] node cpu capacity is 8
I0906 18:30:45.900195 16537 node_conditions.go:105] duration metric: took 179.262466ms to run NodePressure ...
I0906 18:30:45.900205 16537 start.go:241] waiting for startup goroutines ...
I0906 18:30:46.131671 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:46.281155 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:46.634493 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:46.781907 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:47.132256 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:47.280753 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:47.632320 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:47.781139 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:48.132335 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:48.281234 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:48.650167 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:48.851848 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:49.131131 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:49.280540 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:49.632739 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:49.780854 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:50.131634 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:50.280907 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:50.631720 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:50.780374 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:51.131906 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:51.280836 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:51.633011 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:51.781740 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:52.133432 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:52.281146 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:52.632269 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:52.781364 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:53.132651 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:53.314521 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:53.632561 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:53.781014 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:54.132063 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:54.280784 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:54.631816 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:54.780901 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:55.131565 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:55.280780 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:55.631597 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:55.780852 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:56.131345 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:56.280568 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:56.632951 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:56.833590 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:57.133221 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:57.281305 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:57.631542 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:57.780668 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:58.132997 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:58.280706 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:58.632005 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:58.781079 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:59.132274 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:59.280823 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:30:59.631912 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:30:59.780748 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:00.131059 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:00.280622 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0906 18:31:00.631713 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:00.780760 16537 kapi.go:107] duration metric: took 25.503596012s to wait for kubernetes.io/minikube-addons=registry ...
I0906 18:31:01.132962 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:01.631568 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:02.132718 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:02.631781 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:03.131889 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:03.631583 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:04.131428 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:04.631994 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:05.132349 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:05.632491 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:06.131360 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:06.631800 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:07.132267 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:07.632030 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:08.132673 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:08.632551 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:09.131606 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:09.632354 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:10.132139 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:10.632560 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:11.131233 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:11.631172 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:12.131879 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:12.631748 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:13.131552 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:13.632819 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:14.131812 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:14.632351 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:15.131838 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:15.631985 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:16.131713 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:16.632422 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:17.131820 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:17.632622 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:18.132723 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:18.632698 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:19.131433 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:19.632440 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:20.132183 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:20.632883 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:21.131836 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:21.631808 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0906 18:31:22.132365 16537 kapi.go:107] duration metric: took 44.505042987s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0906 18:32:03.675254 16537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0906 18:32:03.675278 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:04.174612 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:04.674317 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:05.174241 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:05.674391 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:06.174498 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:06.674882 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:07.175050 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:07.674996 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:08.174912 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:08.674806 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:09.175022 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:09.675035 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:10.174854 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:10.675101 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:11.174985 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:11.675536 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:12.174427 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:12.674452 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:13.174533 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:13.674752 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:14.174885 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:14.674691 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:15.195985 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:15.674436 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:16.174611 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:16.675061 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:17.174589 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:17.674270 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:18.174304 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:18.674706 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:19.174682 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:19.675405 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:20.173993 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:20.675601 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:21.174499 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:21.674929 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:22.174795 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:22.674466 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:23.174309 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:23.674395 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:24.174455 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:24.674512 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:25.174416 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:25.674615 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:26.174926 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:26.674163 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:27.174245 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:27.674613 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:28.174905 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:28.675471 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:29.176051 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:29.675505 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:30.174329 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:30.674365 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:31.174450 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:31.674947 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:32.174572 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:32.674173 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:33.175001 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:33.675022 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:34.174166 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:34.675035 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:35.174485 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:35.674579 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:36.174702 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:36.675070 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:37.174457 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:37.674870 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:38.175256 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:38.674495 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:39.174406 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:39.674525 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:40.174338 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:40.674043 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:41.175403 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:41.674761 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:42.174476 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:42.674362 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:43.174492 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:43.674890 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:44.174997 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:44.674743 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:45.174711 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:45.674531 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:46.175188 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:46.674242 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:47.174259 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:47.673996 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:48.175456 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:48.674266 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:49.173913 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:49.675495 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:50.174917 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:50.675013 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:51.174902 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:51.675649 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:52.174573 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:52.674949 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:53.174763 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:53.675014 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:54.174980 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:54.677768 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:55.174510 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:55.674480 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:56.174895 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:56.674286 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:57.174332 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:57.674659 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:58.175030 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:58.675369 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:59.174377 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:32:59.674634 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:00.174734 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:00.674736 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:01.174432 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:01.675015 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:02.174693 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:02.675162 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:03.174196 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:03.675158 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:04.174832 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:04.674847 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:05.174542 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:05.675003 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:06.174948 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:06.675579 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:07.174481 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:07.675250 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:08.174971 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:08.674659 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:09.174871 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:09.675102 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:10.175060 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:10.674995 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:11.175206 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:11.674281 16537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0906 18:33:12.175025 16537 kapi.go:107] duration metric: took 2m30.503741494s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0906 18:33:12.177001 16537 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0906 18:33:12.178607 16537 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0906 18:33:12.179971 16537 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0906 18:33:12.181533 16537 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, helm-tiller, metrics-server, storage-provisioner, storage-provisioner-rancher, yakd, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0906 18:33:12.182871 16537 addons.go:510] duration metric: took 2m38.211585206s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass helm-tiller metrics-server storage-provisioner storage-provisioner-rancher yakd inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0906 18:33:12.182921 16537 start.go:246] waiting for cluster config update ...
I0906 18:33:12.182942 16537 start.go:255] writing updated cluster config ...
I0906 18:33:12.183213 16537 exec_runner.go:51] Run: rm -f paused
I0906 18:33:12.226992 16537 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
I0906 18:33:12.228930 16537 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Sat 2024-08-31 10:18:00 UTC, end at Fri 2024-09-06 18:43:05 UTC. --
Sep 06 18:36:54 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:36:54.442082840Z" level=info msg="ignoring event" container=45e8b3e6edfd9c9e36e4f61d4c189f85a95541a66a7dfe092c6dfbeed86d61e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 06 18:39:32 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:39:32.000567595Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 06 18:39:32 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:39:32.002987623Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
Sep 06 18:41:55 ubuntu-20-agent-9 cri-dockerd[17083]: time="2024-09-06T18:41:55Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.224981505Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.224981968Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.226929562Z" level=error msg="Error running exec 4c2d34e30d2301c2234aa87ab1c3af7254beea053a6a35eb74e0882a5133546b in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.280523261Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.280548189Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.282675263Z" level=error msg="Error running exec b661f549806c500daf7d46af17da2b156c2e3a329159678119f4b7f9822ecbd9 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
Sep 06 18:41:57 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:41:57.453734589Z" level=info msg="ignoring event" container=13d13599bb3bc1088344955566b325e55df92da6557beeaf56942782bccd3e7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 06 18:42:00 ubuntu-20-agent-9 cri-dockerd[17083]: time="2024-09-06T18:42:00Z" level=error msg="error getting RW layer size for container ID '45e8b3e6edfd9c9e36e4f61d4c189f85a95541a66a7dfe092c6dfbeed86d61e3': Error response from daemon: No such container: 45e8b3e6edfd9c9e36e4f61d4c189f85a95541a66a7dfe092c6dfbeed86d61e3"
Sep 06 18:42:00 ubuntu-20-agent-9 cri-dockerd[17083]: time="2024-09-06T18:42:00Z" level=error msg="Set backoffDuration to : 1m0s for container ID '45e8b3e6edfd9c9e36e4f61d4c189f85a95541a66a7dfe092c6dfbeed86d61e3'"
Sep 06 18:42:04 ubuntu-20-agent-9 cri-dockerd[17083]: time="2024-09-06T18:42:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef97bb9f2d83a276492bfe7d078016390ee9a5387f9e635e646007a22fa9b775/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 06 18:42:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:04.943298585Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 06 18:42:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:04.945588707Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 06 18:42:20 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:20.004859184Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 06 18:42:20 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:20.007470736Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 06 18:42:47 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:47.003878533Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 06 18:42:47 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:42:47.006327360Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.392466865Z" level=info msg="ignoring event" container=ef97bb9f2d83a276492bfe7d078016390ee9a5387f9e635e646007a22fa9b775 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.651237093Z" level=info msg="ignoring event" container=cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.719123085Z" level=info msg="ignoring event" container=f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.788400901Z" level=info msg="ignoring event" container=b2aa1645ab57f3a3cdf56b5f85885f48894a6c8cd15be305624149805f1fce7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 06 18:43:04 ubuntu-20-agent-9 dockerd[16754]: time="2024-09-06T18:43:04.891750101Z" level=info msg="ignoring event" container=8f24dc2a208c77aeeaa923aa38dcc92211f8c23d1c7e9794befbf58731566f80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
13d13599bb3bc ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec About a minute ago Exited gadget 7 fc0c10c9db065 gadget-jbvsd
9e901f66a10e4 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 216533415c82b gcp-auth-89d5ffd79-ppkkk
43a580f7d3080 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 11 minutes ago Running csi-snapshotter 0 ce1af66ffb7e1 csi-hostpathplugin-vj2d9
61e25d0882034 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 11 minutes ago Running csi-provisioner 0 ce1af66ffb7e1 csi-hostpathplugin-vj2d9
6fb4e180217f1 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 11 minutes ago Running liveness-probe 0 ce1af66ffb7e1 csi-hostpathplugin-vj2d9
43a2a792132a1 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 11 minutes ago Running hostpath 0 ce1af66ffb7e1 csi-hostpathplugin-vj2d9
254e95b2101e2 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 11 minutes ago Running node-driver-registrar 0 ce1af66ffb7e1 csi-hostpathplugin-vj2d9
1a6dd9016ca8e registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 11 minutes ago Running csi-resizer 0 e84cdcc4a4a03 csi-hostpath-resizer-0
baa9128400027 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 11 minutes ago Running csi-external-health-monitor-controller 0 ce1af66ffb7e1 csi-hostpathplugin-vj2d9
1c73fb968274b registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 11 minutes ago Running csi-attacher 0 8f560ac8f7195 csi-hostpath-attacher-0
381f8618a360d registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 12 minutes ago Running volume-snapshot-controller 0 290bcb16f425b snapshot-controller-56fcc65765-dlght
40ab53c29a72b registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 12 minutes ago Running volume-snapshot-controller 0 d56a668f399a0 snapshot-controller-56fcc65765-6c22x
d588c3dde7304 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 12 minutes ago Running local-path-provisioner 0 36539825ade8b local-path-provisioner-86d989889c-slpgf
63ab37da08e18 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 12 minutes ago Running yakd 0 a03de44fd66e6 yakd-dashboard-67d98fc6b-k9hqw
f6083c47174f5 gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 12 minutes ago Exited registry-proxy 0 8f24dc2a208c7 registry-proxy-lp46v
875be12584b5c ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f 12 minutes ago Running tiller 0 018f5218c7d09 tiller-deploy-b48cc5f79-qjchh
cc64c892d533c registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d 12 minutes ago Exited registry 0 b2aa1645ab57f registry-6fb4cdfc84-v2hm7
e12546717979e registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 12 minutes ago Running metrics-server 0 07510fb077e82 metrics-server-84c5f94fbc-9xnx2
db791af5a2bf3 gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc 12 minutes ago Running cloud-spanner-emulator 0 7e82da2feb472 cloud-spanner-emulator-769b77f747-q2pqv
89bd2292a8f3a nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 12 minutes ago Running nvidia-device-plugin-ctr 0 1576fc1afcd47 nvidia-device-plugin-daemonset-992f8
a203a84e71cb9 6e38f40d628db 12 minutes ago Running storage-provisioner 0 16d1656f00ae7 storage-provisioner
c23f2594a3e7f cbb01a7bd410d 12 minutes ago Running coredns 0 76d7b635d5c52 coredns-6f6b679f8f-qszb4
90bc0c9443152 ad83b2ca7b09e 12 minutes ago Running kube-proxy 0 49af432d00aae kube-proxy-bpcdx
ed8ba4cf73612 045733566833c 12 minutes ago Running kube-controller-manager 0 033361b24f009 kube-controller-manager-ubuntu-20-agent-9
e43fdf744ae9d 1766f54c897f0 12 minutes ago Running kube-scheduler 0 6e8074a8346e1 kube-scheduler-ubuntu-20-agent-9
a2029506d4d12 604f5db92eaa8 12 minutes ago Running kube-apiserver 0 bcf316f77232e kube-apiserver-ubuntu-20-agent-9
36661887fea2f 2e96e5913fc06 12 minutes ago Running etcd 0 7149a3a1b832b etcd-ubuntu-20-agent-9
==> coredns [c23f2594a3e7] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:37563 - 12641 "HINFO IN 774206148356767685.291074717509105252. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.014129384s
[INFO] 10.244.0.24:52539 - 50572 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000291492s
[INFO] 10.244.0.24:57143 - 27585 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363221s
[INFO] 10.244.0.24:33719 - 16169 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133656s
[INFO] 10.244.0.24:33286 - 19949 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151096s
[INFO] 10.244.0.24:44184 - 31175 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072861s
[INFO] 10.244.0.24:57925 - 47331 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118642s
[INFO] 10.244.0.24:50953 - 11654 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.003799887s
[INFO] 10.244.0.24:41444 - 44307 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.003869916s
[INFO] 10.244.0.24:43269 - 30106 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003416574s
[INFO] 10.244.0.24:49708 - 8189 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003728298s
[INFO] 10.244.0.24:40200 - 19637 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004289346s
[INFO] 10.244.0.24:59273 - 19615 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004343273s
[INFO] 10.244.0.24:54328 - 52059 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002801115s
[INFO] 10.244.0.24:42042 - 39807 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003101451s
==> describe nodes <==
Name: ubuntu-20-agent-9
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-9
kubernetes.io/os=linux
minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_06T18_30_29_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-9
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 06 Sep 2024 18:30:26 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-9
AcquireTime: <unset>
RenewTime: Fri, 06 Sep 2024 18:43:01 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 06 Sep 2024 18:39:11 +0000 Fri, 06 Sep 2024 18:30:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 06 Sep 2024 18:39:11 +0000 Fri, 06 Sep 2024 18:30:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 06 Sep 2024 18:39:11 +0000 Fri, 06 Sep 2024 18:30:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 06 Sep 2024 18:39:11 +0000 Fri, 06 Sep 2024 18:30:26 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.154.0.4
Hostname: ubuntu-20-agent-9
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859320Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859320Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 4894487b-7b30-e033-3a9d-c6f45b6c4cf8
Boot ID: 3a37642b-ebc9-482a-807f-9b9abd72965a
Kernel Version: 5.15.0-1067-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.2.0
Kubelet Version: v1.31.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (21 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m14s
default cloud-spanner-emulator-769b77f747-q2pqv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gadget gadget-jbvsd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
gcp-auth gcp-auth-89d5ffd79-ppkkk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system coredns-6f6b679f8f-qszb4 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 12m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system csi-hostpathplugin-vj2d9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system etcd-ubuntu-20-agent-9 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 12m
kube-system kube-apiserver-ubuntu-20-agent-9 250m (3%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-controller-manager-ubuntu-20-agent-9 200m (2%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-bpcdx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-ubuntu-20-agent-9 100m (1%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system metrics-server-84c5f94fbc-9xnx2 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 12m
kube-system nvidia-device-plugin-daemonset-992f8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system snapshot-controller-56fcc65765-6c22x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system snapshot-controller-56fcc65765-dlght 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system tiller-deploy-b48cc5f79-qjchh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
local-path-storage local-path-provisioner-86d989889c-slpgf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
yakd-dashboard yakd-dashboard-67d98fc6b-k9hqw 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 12m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal Starting 12m kubelet Starting kubelet.
Warning CgroupV1 12m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
Normal RegisteredNode 12m node-controller Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
==> dmesg <==
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 2a f7 67 48 98 08 06
[ +1.194419] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 b9 97 f7 84 25 08 06
[ +0.046903] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000019] ll header: 00000000: ff ff ff ff ff ff 4e 29 68 17 31 66 08 06
[ +2.804668] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 76 3a e8 8a 9b 08 06
[ +2.016476] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 b7 4a c6 b1 cf 08 06
[ +2.300816] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 54 aa 8e 8a dd 08 06
[ +5.305539] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 b1 d6 59 f5 47 08 06
[ +0.422238] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 26 be 97 a2 7e 08 06
[ +0.139849] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 33 22 6f 68 db 08 06
[Sep 6 18:32] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 94 c5 69 77 17 08 06
[ +0.041917] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff de 21 66 a4 1e 92 08 06
[Sep 6 18:33] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 2a 0b 82 05 5c 84 08 06
[ +0.000447] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff b2 62 aa 17 2e b3 08 06
==> etcd [36661887fea2] <==
{"level":"info","ts":"2024-09-06T18:30:24.810584Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-09-06T18:30:25.000402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a is starting a new election at term 1"}
{"level":"info","ts":"2024-09-06T18:30:25.000461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-06T18:30:25.000482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgPreVoteResp from 82d4d36e40f9b4a at term 1"}
{"level":"info","ts":"2024-09-06T18:30:25.000496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became candidate at term 2"}
{"level":"info","ts":"2024-09-06T18:30:25.000504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgVoteResp from 82d4d36e40f9b4a at term 2"}
{"level":"info","ts":"2024-09-06T18:30:25.000514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became leader at term 2"}
{"level":"info","ts":"2024-09-06T18:30:25.000524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
{"level":"info","ts":"2024-09-06T18:30:25.001467Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-06T18:30:25.001624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-06T18:30:25.001637Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-06T18:30:25.001656Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-06T18:30:25.001893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-06T18:30:25.001916Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-06T18:30:25.002284Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-06T18:30:25.002356Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-06T18:30:25.002381Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-06T18:30:25.002884Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-06T18:30:25.002930Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-06T18:30:25.003741Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-06T18:30:25.004039Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
{"level":"warn","ts":"2024-09-06T18:31:02.991362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.00846ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11189916514919458939 > lease_revoke:<id:1b4a91c89a0d2775>","response":"size:27"}
{"level":"info","ts":"2024-09-06T18:40:25.503353Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1720}
{"level":"info","ts":"2024-09-06T18:40:25.528219Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1720,"took":"24.363894ms","hash":2879840899,"current-db-size-bytes":8638464,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4452352,"current-db-size-in-use":"4.5 MB"}
{"level":"info","ts":"2024-09-06T18:40:25.528279Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2879840899,"revision":1720,"compact-revision":-1}
==> gcp-auth [9e901f66a10e] <==
2024/09/06 18:33:11 GCP Auth Webhook started!
2024/09/06 18:33:27 Ready to marshal response ...
2024/09/06 18:33:27 Ready to write response ...
2024/09/06 18:33:28 Ready to marshal response ...
2024/09/06 18:33:28 Ready to write response ...
2024/09/06 18:33:51 Ready to marshal response ...
2024/09/06 18:33:51 Ready to write response ...
2024/09/06 18:33:51 Ready to marshal response ...
2024/09/06 18:33:51 Ready to write response ...
2024/09/06 18:33:51 Ready to marshal response ...
2024/09/06 18:33:51 Ready to write response ...
2024/09/06 18:42:04 Ready to marshal response ...
2024/09/06 18:42:04 Ready to write response ...
==> kernel <==
18:43:05 up 25 min, 0 users, load average: 0.62, 0.53, 0.42
Linux ubuntu-20-agent-9 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [a2029506d4d1] <==
W0906 18:32:03.644740 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.105.222:443: connect: connection refused
E0906 18:32:03.644775 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.105.222:443: connect: connection refused" logger="UnhandledError"
W0906 18:32:44.696688 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.105.222:443: connect: connection refused
E0906 18:32:44.696723 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.105.222:443: connect: connection refused" logger="UnhandledError"
W0906 18:32:44.707950 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.105.222:443: connect: connection refused
E0906 18:32:44.707984 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.105.222:443: connect: connection refused" logger="UnhandledError"
I0906 18:33:27.506439 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0906 18:33:27.524321 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0906 18:33:41.900369 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0906 18:33:41.910576 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0906 18:33:42.031470 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0906 18:33:42.033286 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0906 18:33:42.058926 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
E0906 18:33:42.064169 1 watch.go:250] "Unhandled Error" err="client disconnected" logger="UnhandledError"
I0906 18:33:42.181636 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0906 18:33:42.213035 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0906 18:33:42.226073 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0906 18:33:42.278681 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0906 18:33:42.932416 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0906 18:33:43.111328 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0906 18:33:43.182740 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0906 18:33:43.182822 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0906 18:33:43.182754 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0906 18:33:43.279648 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0906 18:33:43.460455 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [ed8ba4cf7361] <==
W0906 18:42:04.665852 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:04.665901 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:06.103775 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:06.103823 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:11.211356 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:11.211398 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:22.811720 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:22.811773 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:23.776903 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:23.776946 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:31.253293 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:31.253333 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:45.028543 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:45.028584 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:50.457768 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:50.457819 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:53.213376 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:53.213420 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:55.674120 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:55.674167 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:57.055487 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:57.055527 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0906 18:42:59.910064 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0906 18:42:59.910104 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0906 18:43:04.616163 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="7.958µs"
==> kube-proxy [90bc0c944315] <==
I0906 18:30:35.030348 1 server_linux.go:66] "Using iptables proxy"
I0906 18:30:35.251292 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
E0906 18:30:35.251362 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0906 18:30:35.368293 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0906 18:30:35.368361 1 server_linux.go:169] "Using iptables Proxier"
I0906 18:30:35.372075 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0906 18:30:35.372461 1 server.go:483] "Version info" version="v1.31.0"
I0906 18:30:35.372488 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0906 18:30:35.374854 1 config.go:197] "Starting service config controller"
I0906 18:30:35.374894 1 shared_informer.go:313] Waiting for caches to sync for service config
I0906 18:30:35.374928 1 config.go:104] "Starting endpoint slice config controller"
I0906 18:30:35.374935 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0906 18:30:35.375547 1 config.go:326] "Starting node config controller"
I0906 18:30:35.375557 1 shared_informer.go:313] Waiting for caches to sync for node config
I0906 18:30:35.479794 1 shared_informer.go:320] Caches are synced for node config
I0906 18:30:35.479836 1 shared_informer.go:320] Caches are synced for service config
I0906 18:30:35.479879 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [e43fdf744ae9] <==
W0906 18:30:26.357136 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0906 18:30:26.357434 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:26.357441 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0906 18:30:26.357122 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0906 18:30:26.357503 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0906 18:30:26.357468 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.224342 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0906 18:30:27.224385 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.268121 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0906 18:30:27.268159 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.356890 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0906 18:30:27.356942 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.397211 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0906 18:30:27.397255 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.411638 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0906 18:30:27.411682 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.481244 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0906 18:30:27.481297 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.487576 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0906 18:30:27.487651 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.502109 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0906 18:30:27.502151 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0906 18:30:27.534546 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0906 18:30:27.534583 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0906 18:30:27.854543 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Sat 2024-08-31 10:18:00 UTC, end at Fri 2024-09-06 18:43:05 UTC. --
Sep 06 18:42:50 ubuntu-20-agent-9 kubelet[17964]: E0906 18:42:50.850958 17964 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b1ded35-01c2-48b1-88e5-1582b38ae658"
Sep 06 18:42:56 ubuntu-20-agent-9 kubelet[17964]: I0906 18:42:56.848473 17964 scope.go:117] "RemoveContainer" containerID="13d13599bb3bc1088344955566b325e55df92da6557beeaf56942782bccd3e7e"
Sep 06 18:42:56 ubuntu-20-agent-9 kubelet[17964]: E0906 18:42:56.848658 17964 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-jbvsd_gadget(3fab1905-e39d-4c14-9987-dedecd9e7b17)\"" pod="gadget/gadget-jbvsd" podUID="3fab1905-e39d-4c14-9987-dedecd9e7b17"
Sep 06 18:42:59 ubuntu-20-agent-9 kubelet[17964]: E0906 18:42:59.851330 17964 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="4192c3b4-f319-4822-b46f-6e4f1d417e49"
Sep 06 18:43:01 ubuntu-20-agent-9 kubelet[17964]: E0906 18:43:01.850546 17964 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b1ded35-01c2-48b1-88e5-1582b38ae658"
Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.599713 17964 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4192c3b4-f319-4822-b46f-6e4f1d417e49-gcp-creds\") pod \"4192c3b4-f319-4822-b46f-6e4f1d417e49\" (UID: \"4192c3b4-f319-4822-b46f-6e4f1d417e49\") "
Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.599771 17964 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg2rg\" (UniqueName: \"kubernetes.io/projected/4192c3b4-f319-4822-b46f-6e4f1d417e49-kube-api-access-pg2rg\") pod \"4192c3b4-f319-4822-b46f-6e4f1d417e49\" (UID: \"4192c3b4-f319-4822-b46f-6e4f1d417e49\") "
Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.599840 17964 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4192c3b4-f319-4822-b46f-6e4f1d417e49-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4192c3b4-f319-4822-b46f-6e4f1d417e49" (UID: "4192c3b4-f319-4822-b46f-6e4f1d417e49"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.601707 17964 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4192c3b4-f319-4822-b46f-6e4f1d417e49-kube-api-access-pg2rg" (OuterVolumeSpecName: "kube-api-access-pg2rg") pod "4192c3b4-f319-4822-b46f-6e4f1d417e49" (UID: "4192c3b4-f319-4822-b46f-6e4f1d417e49"). InnerVolumeSpecName "kube-api-access-pg2rg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.703121 17964 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4192c3b4-f319-4822-b46f-6e4f1d417e49-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 06 18:43:04 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:04.703152 17964 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pg2rg\" (UniqueName: \"kubernetes.io/projected/4192c3b4-f319-4822-b46f-6e4f1d417e49-kube-api-access-pg2rg\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.007988 17964 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s7cl\" (UniqueName: \"kubernetes.io/projected/e6f429ae-9168-48c6-8e02-968ce47780ae-kube-api-access-9s7cl\") pod \"e6f429ae-9168-48c6-8e02-968ce47780ae\" (UID: \"e6f429ae-9168-48c6-8e02-968ce47780ae\") "
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.010563 17964 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f429ae-9168-48c6-8e02-968ce47780ae-kube-api-access-9s7cl" (OuterVolumeSpecName: "kube-api-access-9s7cl") pod "e6f429ae-9168-48c6-8e02-968ce47780ae" (UID: "e6f429ae-9168-48c6-8e02-968ce47780ae"). InnerVolumeSpecName "kube-api-access-9s7cl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.108913 17964 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6df5\" (UniqueName: \"kubernetes.io/projected/6d764de2-9241-4f56-9564-ae56133efa57-kube-api-access-h6df5\") pod \"6d764de2-9241-4f56-9564-ae56133efa57\" (UID: \"6d764de2-9241-4f56-9564-ae56133efa57\") "
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.109162 17964 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9s7cl\" (UniqueName: \"kubernetes.io/projected/e6f429ae-9168-48c6-8e02-968ce47780ae-kube-api-access-9s7cl\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.110854 17964 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d764de2-9241-4f56-9564-ae56133efa57-kube-api-access-h6df5" (OuterVolumeSpecName: "kube-api-access-h6df5") pod "6d764de2-9241-4f56-9564-ae56133efa57" (UID: "6d764de2-9241-4f56-9564-ae56133efa57"). InnerVolumeSpecName "kube-api-access-h6df5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.209556 17964 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h6df5\" (UniqueName: \"kubernetes.io/projected/6d764de2-9241-4f56-9564-ae56133efa57-kube-api-access-h6df5\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.217902 17964 scope.go:117] "RemoveContainer" containerID="f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.237003 17964 scope.go:117] "RemoveContainer" containerID="f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: E0906 18:43:05.237820 17964 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9" containerID="f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.237857 17964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"} err="failed to get container status \"f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9\": rpc error: code = Unknown desc = Error response from daemon: No such container: f6083c47174f5a63b75dbc183903cb0eb42e241b7fb5206a729a18f92f58eff9"
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.237887 17964 scope.go:117] "RemoveContainer" containerID="cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.262217 17964 scope.go:117] "RemoveContainer" containerID="cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: E0906 18:43:05.263147 17964 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3" containerID="cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"
Sep 06 18:43:05 ubuntu-20-agent-9 kubelet[17964]: I0906 18:43:05.263192 17964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"} err="failed to get container status \"cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3\": rpc error: code = Unknown desc = Error response from daemon: No such container: cc64c892d533ced1941dc213902832d72557f928d728c59309ee9ad47d79a6c3"
==> storage-provisioner [a203a84e71cb] <==
I0906 18:30:36.635753 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0906 18:30:36.654380 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0906 18:30:36.654454 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0906 18:30:36.662742 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0906 18:30:36.663810 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_0f044dc0-9e66-4411-ba4e-a51be2401fa7!
I0906 18:30:36.667248 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27d28267-8c43-487c-9686-90da8e1c914d", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_0f044dc0-9e66-4411-ba4e-a51be2401fa7 became leader
I0906 18:30:36.764617 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_0f044dc0-9e66-4411-ba4e-a51be2401fa7!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name: busybox
Namespace: default
Priority: 0
Service Account: default
Node: ubuntu-20-agent-9/10.154.0.4
Start Time: Fri, 06 Sep 2024 18:33:51 +0000
Labels: integration-test=busybox
Annotations: <none>
Status: Pending
IP: 10.244.0.26
IPs:
IP: 10.244.0.26
Containers:
busybox:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
sleep
3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: this_is_fake
GCP_PROJECT: this_is_fake
GCLOUD_PROJECT: this_is_fake
GOOGLE_CLOUD_PROJECT: this_is_fake
CLOUDSDK_CORE_PROJECT: this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5qkr9 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-5qkr9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m14s default-scheduler Successfully assigned default/busybox to ubuntu-20-agent-9
Normal Pulling 7m50s (x4 over 9m13s) kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning Failed 7m49s (x4 over 9m13s) kubelet Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning Failed 7m49s (x4 over 9m13s) kubelet Error: ErrImagePull
Warning Failed 7m24s (x6 over 9m13s) kubelet Error: ImagePullBackOff
Normal BackOff 4m11s (x20 over 9m13s) kubelet Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.80s)