=== RUN TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.86403ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-5zfg4" [32dd9391-b30e-4231-9d9e-8bd0457919d8] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004008799s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rbxpj" [ae04301c-b1c9-4a19-af2e-04bc0071e797] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004377266s
addons_test.go:338: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.082712911s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
error: timed out waiting for the condition
** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
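The failing step above is, at its core, an HTTP reachability probe: `wget --spider -S` sends a request and inspects only the response status and headers, and the test asserts it sees `HTTP/1.1 200`. Here the probe never got a response at all ("timed out waiting for the condition"), which points at service endpoints, DNS, or CNI rather than a registry error. A minimal sketch of the same style of check in Python, run against a local stand-in server (the real `registry.kube-system.svc.cluster.local` name is only resolvable from inside the cluster):

```python
# Sketch: the reachability check the test performs with `wget --spider -S`,
# reproduced against a local stand-in HTTP server instead of the in-cluster
# registry Service.
import http.server
import threading
import urllib.request

class StubRegistry(http.server.BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200)   # a healthy registry answers 200
        self.end_headers()
    def log_message(self, *args): # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), StubRegistry)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/"
req = urllib.request.Request(url, method="HEAD")  # --spider: headers only
with urllib.request.urlopen(req, timeout=5) as resp:
    status = resp.status
print(status)  # the test asserts on exactly this value

server.shutdown()
```

Against the real cluster the equivalent is the `kubectl run --rm registry-test ... wget --spider` command shown in the log; when that times out rather than returning a non-200 status, the usual suspects are a Service with no ready endpoints or broken in-cluster DNS.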
addons_test.go:357: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/09/27 00:27:00 [DEBUG] GET http://10.154.0.4:5000
addons_test.go:386: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.31.1 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| start | --download-only -p | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:41241 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:15 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:17 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=none --bootstrapper=kubeadm | | | | | |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 27 Sep 24 00:17 UTC | 27 Sep 24 00:17 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| ip | minikube ip | minikube | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/09/27 00:15:26
Running on machine: ubuntu-20-agent-9
Binary: Built with gc go1.23.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0927 00:15:26.056754 127143 out.go:345] Setting OutFile to fd 1 ...
I0927 00:15:26.056930 127143 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:15:26.056944 127143 out.go:358] Setting ErrFile to fd 2...
I0927 00:15:26.056949 127143 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:15:26.057165 127143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-116460/.minikube/bin
I0927 00:15:26.057802 127143 out.go:352] Setting JSON to false
I0927 00:15:26.058645 127143 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7064,"bootTime":1727389062,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0927 00:15:26.058747 127143 start.go:139] virtualization: kvm guest
I0927 00:15:26.060833 127143 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
W0927 00:15:26.062248 127143 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-116460/.minikube/cache/preloaded-tarball: no such file or directory
I0927 00:15:26.062283 127143 out.go:177] - MINIKUBE_LOCATION=19711
I0927 00:15:26.062297 127143 notify.go:220] Checking for updates...
I0927 00:15:26.064701 127143 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0927 00:15:26.065968 127143 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
I0927 00:15:26.067367 127143 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
I0927 00:15:26.068634 127143 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0927 00:15:26.070226 127143 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0927 00:15:26.071773 127143 driver.go:394] Setting default libvirt URI to qemu:///system
I0927 00:15:26.082582 127143 out.go:177] * Using the none driver based on user configuration
I0927 00:15:26.083719 127143 start.go:297] selected driver: none
I0927 00:15:26.083738 127143 start.go:901] validating driver "none" against <nil>
I0927 00:15:26.083764 127143 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0927 00:15:26.083827 127143 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0927 00:15:26.084295 127143 out.go:270] ! The 'none' driver does not respect the --memory flag
I0927 00:15:26.085103 127143 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0927 00:15:26.085467 127143 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0927 00:15:26.085514 127143 cni.go:84] Creating CNI manager for ""
I0927 00:15:26.085589 127143 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0927 00:15:26.085607 127143 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0927 00:15:26.085671 127143 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0927 00:15:26.086983 127143 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0927 00:15:26.088716 127143 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/config.json ...
I0927 00:15:26.088767 127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/config.json: {Name:mk699d4bc5cb4218ce6babe138df72e9f0ac852c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:26.088943 127143 start.go:360] acquireMachinesLock for minikube: {Name:mk0c3282f0caac62dc7b9c8c9c6d629924f62b3c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0927 00:15:26.089001 127143 start.go:364] duration metric: took 23.983µs to acquireMachinesLock for "minikube"
I0927 00:15:26.089021 127143 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0927 00:15:26.089108 127143 start.go:125] createHost starting for "" (driver="none")
I0927 00:15:26.090506 127143 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0927 00:15:26.091525 127143 exec_runner.go:51] Run: systemctl --version
I0927 00:15:26.094078 127143 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0927 00:15:26.094142 127143 client.go:168] LocalClient.Create starting
I0927 00:15:26.094257 127143 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-116460/.minikube/certs/ca.pem
I0927 00:15:26.094302 127143 main.go:141] libmachine: Decoding PEM data...
I0927 00:15:26.094329 127143 main.go:141] libmachine: Parsing certificate...
I0927 00:15:26.094408 127143 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-116460/.minikube/certs/cert.pem
I0927 00:15:26.094438 127143 main.go:141] libmachine: Decoding PEM data...
I0927 00:15:26.094452 127143 main.go:141] libmachine: Parsing certificate...
I0927 00:15:26.094906 127143 client.go:171] duration metric: took 749.826µs to LocalClient.Create
I0927 00:15:26.094942 127143 start.go:167] duration metric: took 869.44µs to libmachine.API.Create "minikube"
I0927 00:15:26.094952 127143 start.go:293] postStartSetup for "minikube" (driver="none")
I0927 00:15:26.095013 127143 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0927 00:15:26.095096 127143 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0927 00:15:26.104789 127143 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0927 00:15:26.104825 127143 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0927 00:15:26.104840 127143 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0927 00:15:26.106628 127143 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0927 00:15:26.107863 127143 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-116460/.minikube/addons for local assets ...
I0927 00:15:26.107962 127143 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-116460/.minikube/files for local assets ...
I0927 00:15:26.107993 127143 start.go:296] duration metric: took 13.034274ms for postStartSetup
I0927 00:15:26.108949 127143 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/config.json ...
I0927 00:15:26.109169 127143 start.go:128] duration metric: took 20.04494ms to createHost
I0927 00:15:26.109188 127143 start.go:83] releasing machines lock for "minikube", held for 20.17718ms
I0927 00:15:26.109677 127143 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0927 00:15:26.109738 127143 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0927 00:15:26.111918 127143 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0927 00:15:26.111963 127143 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0927 00:15:26.120512 127143 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0927 00:15:26.120570 127143 start.go:495] detecting cgroup driver to use...
I0927 00:15:26.120604 127143 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0927 00:15:26.121149 127143 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0927 00:15:26.139800 127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0927 00:15:26.149822 127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0927 00:15:26.159032 127143 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0927 00:15:26.159111 127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0927 00:15:26.170218 127143 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0927 00:15:26.179518 127143 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0927 00:15:26.189860 127143 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0927 00:15:26.200070 127143 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0927 00:15:26.210791 127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0927 00:15:26.221057 127143 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0927 00:15:26.232327 127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0927 00:15:26.243052 127143 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0927 00:15:26.254136 127143 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0927 00:15:26.263239 127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0927 00:15:26.509238 127143 exec_runner.go:51] Run: sudo systemctl restart containerd
I0927 00:15:26.625304 127143 start.go:495] detecting cgroup driver to use...
I0927 00:15:26.625374 127143 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0927 00:15:26.625477 127143 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0927 00:15:26.648028 127143 exec_runner.go:51] Run: which cri-dockerd
I0927 00:15:26.649114 127143 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0927 00:15:26.658441 127143 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0927 00:15:26.658468 127143 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0927 00:15:26.658507 127143 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0927 00:15:26.667029 127143 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0927 00:15:26.667230 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3453152198 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0927 00:15:26.677071 127143 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0927 00:15:26.908822 127143 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0927 00:15:27.139677 127143 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0927 00:15:27.139814 127143 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0927 00:15:27.139830 127143 exec_runner.go:203] rm: /etc/docker/daemon.json
I0927 00:15:27.139866 127143 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0927 00:15:27.148856 127143 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0927 00:15:27.149107 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1647215183 /etc/docker/daemon.json
I0927 00:15:27.158297 127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0927 00:15:27.363636 127143 exec_runner.go:51] Run: sudo systemctl restart docker
I0927 00:15:27.774460 127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0927 00:15:27.786737 127143 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0927 00:15:27.803946 127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0927 00:15:27.815843 127143 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0927 00:15:28.032042 127143 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0927 00:15:28.241421 127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0927 00:15:28.461380 127143 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0927 00:15:28.476952 127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0927 00:15:28.488054 127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0927 00:15:28.700508 127143 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0927 00:15:28.772855 127143 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0927 00:15:28.772945 127143 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0927 00:15:28.774419 127143 start.go:563] Will wait 60s for crictl version
I0927 00:15:28.774467 127143 exec_runner.go:51] Run: which crictl
I0927 00:15:28.775334 127143 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0927 00:15:28.806819 127143 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.3.1
RuntimeApiVersion: v1
I0927 00:15:28.806901 127143 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0927 00:15:28.830163 127143 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0927 00:15:28.854192 127143 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
I0927 00:15:28.854304 127143 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0927 00:15:28.857252 127143 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0927 00:15:28.858663 127143 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0927 00:15:28.858793 127143 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 00:15:28.858806 127143 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.1 docker true true} ...
I0927 00:15:28.858916 127143 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0927 00:15:28.858977 127143 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0927 00:15:28.911738 127143 cni.go:84] Creating CNI manager for ""
I0927 00:15:28.911764 127143 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0927 00:15:28.911777 127143 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0927 00:15:28.911807 127143 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0927 00:15:28.912002 127143 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.154.0.4
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ubuntu-20-agent-9"
  kubeletExtraArgs:
    node-ip: 10.154.0.4
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
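The config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a quick sanity check, not something minikube itself runs, the documents and their kinds can be listed with standard tools; the inline copy below is a stand-in for the real file:

```shell
# List the kind of each YAML document in a generated kubeadm config.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Three "---" separators delimit the four documents.
grep '^kind:' "$cfg" | awk '{print $2}'
```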
I0927 00:15:28.912086 127143 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
I0927 00:15:28.920769 127143 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
Initiating transfer...
I0927 00:15:28.920840 127143 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
I0927 00:15:28.928785 127143 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
I0927 00:15:28.928805 127143 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
I0927 00:15:28.928849 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
I0927 00:15:28.928850 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
I0927 00:15:28.928805 127143 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
I0927 00:15:28.929026 127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0927 00:15:28.941335 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
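The download URLs above carry a `?checksum=file:...sha256` suffix: each release binary ships a companion `.sha256` file, and the downloaded binary must hash to the published digest. A minimal sketch of that verification with GNU coreutils, using a stand-in file rather than a real kubectl binary:

```shell
# Verify a downloaded binary against its published SHA-256 digest.
set -eu
dir=$(mktemp -d)
printf 'fake-kubectl-binary' > "$dir/kubectl"
# Build the expected-checksum file in the "<hexdigest>  <name>" format
# that "sha256sum -c" consumes.
(cd "$dir" && sha256sum kubectl > kubectl.sha256)
(cd "$dir" && sha256sum -c kubectl.sha256)   # prints "kubectl: OK"
```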
I0927 00:15:28.979871 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3204681781 /var/lib/minikube/binaries/v1.31.1/kubectl
I0927 00:15:28.980059 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2218151488 /var/lib/minikube/binaries/v1.31.1/kubeadm
I0927 00:15:29.025178 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1751216618 /var/lib/minikube/binaries/v1.31.1/kubelet
I0927 00:15:29.095573 127143 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0927 00:15:29.104241 127143 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0927 00:15:29.104266 127143 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0927 00:15:29.104313 127143 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0927 00:15:29.112746 127143 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I0927 00:15:29.112943 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2724996801 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0927 00:15:29.122116 127143 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0927 00:15:29.122149 127143 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0927 00:15:29.122208 127143 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0927 00:15:29.130497 127143 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0927 00:15:29.130661 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1833491100 /lib/systemd/system/kubelet.service
I0927 00:15:29.140449 127143 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
I0927 00:15:29.140577 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3226971426 /var/tmp/minikube/kubeadm.yaml.new
I0927 00:15:29.149392 127143 exec_runner.go:51] Run: grep 10.154.0.4 control-plane.minikube.internal$ /etc/hosts
I0927 00:15:29.150749 127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0927 00:15:29.377259 127143 exec_runner.go:51] Run: sudo systemctl start kubelet
I0927 00:15:29.391747 127143 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube for IP: 10.154.0.4
I0927 00:15:29.391775 127143 certs.go:194] generating shared ca certs ...
I0927 00:15:29.391817 127143 certs.go:226] acquiring lock for ca certs: {Name:mk756c5fab023c128c8a1ee40b210d4906fcf7ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:29.391976 127143 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-116460/.minikube/ca.key
I0927 00:15:29.392023 127143 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-116460/.minikube/proxy-client-ca.key
I0927 00:15:29.392037 127143 certs.go:256] generating profile certs ...
I0927 00:15:29.392113 127143 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.key
I0927 00:15:29.392132 127143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.crt with IP's: []
I0927 00:15:29.511224 127143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.crt ...
I0927 00:15:29.511259 127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.crt: {Name:mk523f5ef9545f5657d8fdc08dc03deac1b0df8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:29.511434 127143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.key ...
I0927 00:15:29.511451 127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.key: {Name:mkc69db3956d3a764b405bfb9fb4610e0667c104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:29.511544 127143 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key.1b9420d6
I0927 00:15:29.511560 127143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
I0927 00:15:29.694902 127143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
I0927 00:15:29.694938 127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk7addcd2141e6f37fc5edcf6970dd4475e3537a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:29.695128 127143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
I0927 00:15:29.695158 127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mk73cbc0fb0bc62c3a7760cbeaa9d0ff4b5b0b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:29.695256 127143 certs.go:381] copying /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt
I0927 00:15:29.695379 127143 certs.go:385] copying /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key
I0927 00:15:29.695467 127143 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.key
I0927 00:15:29.695488 127143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0927 00:15:29.856263 127143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.crt ...
I0927 00:15:29.856304 127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.crt: {Name:mkef23134c28d00bad2b1e8ae2ef253b7d3a6849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:29.856488 127143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.key ...
I0927 00:15:29.856506 127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.key: {Name:mk4fa663cf49f2e04c5a9b15417dfaa8afeced43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:29.856709 127143 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-116460/.minikube/certs/ca-key.pem (1675 bytes)
I0927 00:15:29.856760 127143 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-116460/.minikube/certs/ca.pem (1082 bytes)
I0927 00:15:29.856792 127143 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-116460/.minikube/certs/cert.pem (1123 bytes)
I0927 00:15:29.856831 127143 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-116460/.minikube/certs/key.pem (1679 bytes)
I0927 00:15:29.857548 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0927 00:15:29.857706 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2296640606 /var/lib/minikube/certs/ca.crt
I0927 00:15:29.866992 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0927 00:15:29.867179 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube146355848 /var/lib/minikube/certs/ca.key
I0927 00:15:29.875374 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0927 00:15:29.875522 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3179250502 /var/lib/minikube/certs/proxy-client-ca.crt
I0927 00:15:29.884851 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0927 00:15:29.885106 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3654155351 /var/lib/minikube/certs/proxy-client-ca.key
I0927 00:15:29.892975 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0927 00:15:29.893198 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube313591325 /var/lib/minikube/certs/apiserver.crt
I0927 00:15:29.901665 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0927 00:15:29.901886 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube922025066 /var/lib/minikube/certs/apiserver.key
I0927 00:15:29.910010 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0927 00:15:29.910195 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube577114657 /var/lib/minikube/certs/proxy-client.crt
I0927 00:15:29.918339 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0927 00:15:29.918470 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3872470726 /var/lib/minikube/certs/proxy-client.key
I0927 00:15:29.926778 127143 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0927 00:15:29.926801 127143 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:29.926853 127143 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:29.934329 127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0927 00:15:29.934488 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2488842704 /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:29.942519 127143 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0927 00:15:29.942639 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1659676463 /var/lib/minikube/kubeconfig
I0927 00:15:29.950827 127143 exec_runner.go:51] Run: openssl version
I0927 00:15:29.953647 127143 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0927 00:15:29.962278 127143 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:29.963561 127143 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 27 00:15 /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:29.963625 127143 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0927 00:15:29.966492 127143 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
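The `openssl x509 -hash` / `ln -fs .../b5213941.0` pair above exists because OpenSSL looks up trusted CAs in `/etc/ssl/certs` by subject-name hash, so the CA certificate needs a `<hash>.0` symlink. A sketch of the same steps in a scratch directory (the throwaway CA here stands in for minikubeCA.pem; it assumes the `openssl` CLI):

```shell
# Create the subject-hash symlink OpenSSL uses for CA lookup.
set -eu
dir=$(mktemp -d)
# Generate a throwaway self-signed CA to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-ca" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```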
I0927 00:15:29.974604 127143 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0927 00:15:29.975772 127143 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0927 00:15:29.975807 127143 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0927 00:15:29.975907 127143 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0927 00:15:29.990895 127143 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0927 00:15:30.000918 127143 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0927 00:15:30.009556 127143 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0927 00:15:30.030860 127143 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0927 00:15:30.040703 127143 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0927 00:15:30.040730 127143 kubeadm.go:157] found existing configuration files:
I0927 00:15:30.040783 127143 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0927 00:15:30.048956 127143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0927 00:15:30.049038 127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0927 00:15:30.057984 127143 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0927 00:15:30.066136 127143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0927 00:15:30.066197 127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0927 00:15:30.074289 127143 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0927 00:15:30.082936 127143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0927 00:15:30.083006 127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0927 00:15:30.091783 127143 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0927 00:15:30.100647 127143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0927 00:15:30.100710 127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
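The four grep-then-rm pairs above implement one stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise so `kubeadm init` regenerates it. A hedged re-creation of that loop in a scratch directory (not /etc/kubernetes):

```shell
# Remove kubeconfigs that do not reference the expected endpoint.
set -eu
endpoint="https://control-plane.minikube.internal:8443"
dir=$(mktemp -d)
echo "server: $endpoint" > "$dir/admin.conf"            # points at the right endpoint
echo "server: https://other:6443" > "$dir/kubelet.conf" # stale
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! grep -q "$endpoint" "$dir/$f" 2>/dev/null; then
    rm -f "$dir/$f"   # missing or stale: delete so kubeadm regenerates it
  fi
done
ls "$dir"   # only admin.conf survives
```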
I0927 00:15:30.108881 127143 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0927 00:15:30.150559 127143 kubeadm.go:310] W0927 00:15:30.150438 128015 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0927 00:15:30.151040 127143 kubeadm.go:310] W0927 00:15:30.150996 128015 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0927 00:15:30.152679 127143 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
I0927 00:15:30.152724 127143 kubeadm.go:310] [preflight] Running pre-flight checks
I0927 00:15:30.252455 127143 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0927 00:15:30.252565 127143 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0927 00:15:30.252578 127143 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0927 00:15:30.252585 127143 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0927 00:15:30.264679 127143 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0927 00:15:30.267119 127143 out.go:235] - Generating certificates and keys ...
I0927 00:15:30.267168 127143 kubeadm.go:310] [certs] Using existing ca certificate authority
I0927 00:15:30.267183 127143 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0927 00:15:30.322639 127143 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0927 00:15:30.489694 127143 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0927 00:15:30.653716 127143 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0927 00:15:30.766107 127143 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0927 00:15:30.920743 127143 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0927 00:15:30.920813 127143 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I0927 00:15:30.978828 127143 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0927 00:15:30.978868 127143 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
I0927 00:15:31.156136 127143 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0927 00:15:31.338767 127143 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0927 00:15:31.523801 127143 kubeadm.go:310] [certs] Generating "sa" key and public key
I0927 00:15:31.523974 127143 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0927 00:15:31.633944 127143 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0927 00:15:31.827967 127143 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0927 00:15:31.943031 127143 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0927 00:15:32.007306 127143 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0927 00:15:32.102454 127143 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0927 00:15:32.103003 127143 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0927 00:15:32.105469 127143 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0927 00:15:32.107613 127143 out.go:235] - Booting up control plane ...
I0927 00:15:32.107644 127143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0927 00:15:32.107663 127143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0927 00:15:32.108360 127143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0927 00:15:32.129296 127143 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0927 00:15:32.134267 127143 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0927 00:15:32.134327 127143 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0927 00:15:32.357211 127143 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0927 00:15:32.357242 127143 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0927 00:15:32.858795 127143 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.564194ms
I0927 00:15:32.858825 127143 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0927 00:15:36.860407 127143 kubeadm.go:310] [api-check] The API server is healthy after 4.001590682s
I0927 00:15:36.873589 127143 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0927 00:15:36.885294 127143 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0927 00:15:36.906023 127143 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0927 00:15:36.906069 127143 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0927 00:15:36.914328 127143 kubeadm.go:310] [bootstrap-token] Using token: cb9aai.7468zzz9nketn421
I0927 00:15:36.915888 127143 out.go:235] - Configuring RBAC rules ...
I0927 00:15:36.915923 127143 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0927 00:15:36.920195 127143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0927 00:15:36.928140 127143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0927 00:15:36.930921 127143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0927 00:15:36.934654 127143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0927 00:15:36.937445 127143 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0927 00:15:37.268914 127143 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0927 00:15:37.700291 127143 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0927 00:15:38.268146 127143 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0927 00:15:38.268934 127143 kubeadm.go:310]
I0927 00:15:38.268956 127143 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0927 00:15:38.268961 127143 kubeadm.go:310]
I0927 00:15:38.268967 127143 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0927 00:15:38.268971 127143 kubeadm.go:310]
I0927 00:15:38.268976 127143 kubeadm.go:310] mkdir -p $HOME/.kube
I0927 00:15:38.268980 127143 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0927 00:15:38.269001 127143 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0927 00:15:38.269005 127143 kubeadm.go:310]
I0927 00:15:38.269017 127143 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0927 00:15:38.269021 127143 kubeadm.go:310]
I0927 00:15:38.269025 127143 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0927 00:15:38.269028 127143 kubeadm.go:310]
I0927 00:15:38.269031 127143 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0927 00:15:38.269034 127143 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0927 00:15:38.269037 127143 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0927 00:15:38.269039 127143 kubeadm.go:310]
I0927 00:15:38.269043 127143 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0927 00:15:38.269046 127143 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0927 00:15:38.269049 127143 kubeadm.go:310]
I0927 00:15:38.269051 127143 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cb9aai.7468zzz9nketn421 \
I0927 00:15:38.269055 127143 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:1d141527aaf1d2c0fb7b0adcde69f8a0a613dff7bc5dc95cc5153131c10474d3 \
I0927 00:15:38.269057 127143 kubeadm.go:310] --control-plane
I0927 00:15:38.269060 127143 kubeadm.go:310]
I0927 00:15:38.269063 127143 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0927 00:15:38.269065 127143 kubeadm.go:310]
I0927 00:15:38.269068 127143 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cb9aai.7468zzz9nketn421 \
I0927 00:15:38.269071 127143 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:1d141527aaf1d2c0fb7b0adcde69f8a0a613dff7bc5dc95cc5153131c10474d3
I0927 00:15:38.272027 127143 cni.go:84] Creating CNI manager for ""
I0927 00:15:38.272064 127143 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0927 00:15:38.273916 127143 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0927 00:15:38.275288 127143 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0927 00:15:38.286171 127143 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0927 00:15:38.286334 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3665517556 /etc/cni/net.d/1-k8s.conflist
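The 496-byte `/etc/cni/net.d/1-k8s.conflist` written above configures the bridge CNI the log just recommended. Illustrative only, not the literal file minikube writes: a minimal bridge-plugin conflist with host-local IPAM on the pod subnet configured earlier (10.244.0.0/16), written to a scratch directory and validated as JSON:

```shell
# Write and validate a minimal bridge CNI conflist (illustrative content).
set -eu
dir=$(mktemp -d)
cat > "$dir/1-k8s.conflist" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# CNI requires valid JSON; json.tool is a cheap validator.
python3 -m json.tool "$dir/1-k8s.conflist" >/dev/null && echo valid
```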
I0927 00:15:38.297571 127143 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0927 00:15:38.297633 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:38.297659 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_09_27T00_15_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0927 00:15:38.306779 127143 ops.go:34] apiserver oom_adj: -16
I0927 00:15:38.366706 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:38.866850 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:39.367329 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:39.867701 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:40.367139 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:40.867268 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:41.367194 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:41.867194 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:42.366845 127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0927 00:15:42.437193 127143 kubeadm.go:1113] duration metric: took 4.139618607s to wait for elevateKubeSystemPrivileges
I0927 00:15:42.437231 127143 kubeadm.go:394] duration metric: took 12.461428273s to StartCluster
I0927 00:15:42.437253 127143 settings.go:142] acquiring lock: {Name:mk21aca334d9a656fcd6241902ed89386883726b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:42.437330 127143 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19711-116460/kubeconfig
I0927 00:15:42.438086 127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/kubeconfig: {Name:mk005d44ca8be515ae45a481c5d822e83fc3b66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0927 00:15:42.438424 127143 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0927 00:15:42.438636 127143 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:15:42.438568 127143 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0927 00:15:42.438702 127143 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0927 00:15:42.438719 127143 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0927 00:15:42.438723 127143 addons.go:69] Setting yakd=true in profile "minikube"
I0927 00:15:42.438742 127143 addons.go:234] Setting addon yakd=true in "minikube"
I0927 00:15:42.438750 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.438778 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.438774 127143 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0927 00:15:42.438804 127143 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0927 00:15:42.438798 127143 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0927 00:15:42.438818 127143 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0927 00:15:42.438832 127143 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0927 00:15:42.438846 127143 mustload.go:65] Loading cluster: minikube
I0927 00:15:42.438849 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.439111 127143 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:15:42.439442 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.439459 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.439494 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.439498 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.439508 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.439511 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.439511 127143 addons.go:69] Setting registry=true in profile "minikube"
I0927 00:15:42.439526 127143 addons.go:234] Setting addon registry=true in "minikube"
I0927 00:15:42.439535 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.439538 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.439543 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.439553 127143 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0927 00:15:42.439558 127143 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0927 00:15:42.439545 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.439570 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.439571 127143 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0927 00:15:42.439593 127143 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0927 00:15:42.439602 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.439614 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.439648 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.439792 127143 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0927 00:15:42.439818 127143 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0927 00:15:42.439860 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.439991 127143 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0927 00:15:42.440023 127143 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0927 00:15:42.440237 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.440253 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.440296 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.439560 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.440307 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.440328 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.440339 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.440545 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.440558 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.440590 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.440594 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.440609 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.440639 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.440300 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.441002 127143 addons.go:69] Setting volcano=true in profile "minikube"
I0927 00:15:42.441019 127143 addons.go:234] Setting addon volcano=true in "minikube"
I0927 00:15:42.441046 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.441692 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.441715 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.441745 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.442846 127143 out.go:177] * Configuring local host environment ...
I0927 00:15:42.439549 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.443643 127143 addons.go:69] Setting metrics-server=true in profile "minikube"
I0927 00:15:42.443723 127143 addons.go:234] Setting addon metrics-server=true in "minikube"
I0927 00:15:42.439499 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.443778 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.443912 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.443938 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.443975 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.443795 127143 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0927 00:15:42.444264 127143 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0927 00:15:42.444294 127143 host.go:66] Checking if "minikube" exists ...
W0927 00:15:42.446486 127143 out.go:270] *
W0927 00:15:42.446508 127143 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0927 00:15:42.446517 127143 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
W0927 00:15:42.446525 127143 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0927 00:15:42.446532 127143 out.go:270] *
W0927 00:15:42.446582 127143 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
W0927 00:15:42.446594 127143 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0927 00:15:42.446601 127143 out.go:270] *
W0927 00:15:42.446629 127143 out.go:270] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0927 00:15:42.446639 127143 out.go:270] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0927 00:15:42.446645 127143 out.go:270] *
W0927 00:15:42.446651 127143 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0927 00:15:42.446682 127143 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0927 00:15:42.450487 127143 out.go:177] * Verifying Kubernetes components...
I0927 00:15:42.451754 127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0927 00:15:42.460334 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.461499 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.463671 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.466089 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.467767 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.468364 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.471321 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.472257 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.472623 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.474011 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.474045 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.474139 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.474806 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.474832 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.474864 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.476217 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.476868 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.487357 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.487505 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.489822 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.489917 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.491704 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.491776 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.493650 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.493721 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.494198 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.494258 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.500622 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.500690 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.500978 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.504801 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.508501 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.508570 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.509098 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.511986 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.513961 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.513996 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.514774 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.514836 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.516623 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.516651 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.518144 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.518206 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.519344 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.519369 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.521209 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.521779 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.522243 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.530795 127143 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
I0927 00:15:42.532279 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.532303 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.532317 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.532323 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.532694 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.533673 127143 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0927 00:15:42.533737 127143 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0927 00:15:42.534015 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube361156128 /etc/kubernetes/addons/ig-namespace.yaml
I0927 00:15:42.534295 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.534313 127143 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
I0927 00:15:42.534317 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.536355 127143 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0927 00:15:42.536388 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0927 00:15:42.536516 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2060666848 /etc/kubernetes/addons/deployment.yaml
I0927 00:15:42.545482 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.545578 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.545636 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.545705 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.546501 127143 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0927 00:15:42.546543 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.547230 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.547247 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.547284 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.547560 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.547588 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.548301 127143 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0927 00:15:42.548916 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.548929 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.549408 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.549574 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.549620 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.550870 127143 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0927 00:15:42.550966 127143 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0927 00:15:42.550992 127143 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0927 00:15:42.551148 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube74715924 /etc/kubernetes/addons/yakd-ns.yaml
I0927 00:15:42.554651 127143 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0927 00:15:42.555527 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.555656 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.556026 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.557448 127143 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0927 00:15:42.557552 127143 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0927 00:15:42.557678 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.558966 127143 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0927 00:15:42.559016 127143 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
I0927 00:15:42.559142 127143 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0927 00:15:42.559159 127143 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0927 00:15:42.559167 127143 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0927 00:15:42.559206 127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0927 00:15:42.560451 127143 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0927 00:15:42.560481 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0927 00:15:42.560514 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.560533 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.560621 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2257949060 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0927 00:15:42.562656 127143 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0927 00:15:42.562848 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.563778 127143 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0927 00:15:42.563842 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.563892 127143 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
I0927 00:15:42.563897 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.564439 127143 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0927 00:15:42.564471 127143 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0927 00:15:42.564634 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1616076761 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0927 00:15:42.565433 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.567075 127143 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0927 00:15:42.567170 127143 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
I0927 00:15:42.567277 127143 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0927 00:15:42.567337 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:42.568217 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:42.568269 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:42.568460 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:42.568510 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.568531 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.572947 127143 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0927 00:15:42.573737 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.574224 127143 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
I0927 00:15:42.574595 127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0927 00:15:42.574636 127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0927 00:15:42.574789 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube187722530 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0927 00:15:42.575355 127143 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0927 00:15:42.576516 127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0927 00:15:42.576557 127143 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0927 00:15:42.576702 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4100290210 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0927 00:15:42.577278 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.578424 127143 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0927 00:15:42.578511 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
I0927 00:15:42.579112 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube241673189 /etc/kubernetes/addons/volcano-deployment.yaml
I0927 00:15:42.581851 127143 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0927 00:15:42.581889 127143 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0927 00:15:42.582021 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3790915627 /etc/kubernetes/addons/yakd-sa.yaml
I0927 00:15:42.582771 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0927 00:15:42.589366 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0927 00:15:42.589548 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2461086298 /etc/kubernetes/addons/storage-provisioner.yaml
I0927 00:15:42.589796 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.589819 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.590103 127143 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0927 00:15:42.590158 127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0927 00:15:42.590284 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3455873887 /etc/kubernetes/addons/rbac-hostpath.yaml
I0927 00:15:42.590995 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0927 00:15:42.593580 127143 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0927 00:15:42.593611 127143 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0927 00:15:42.593725 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube919025420 /etc/kubernetes/addons/yakd-crb.yaml
I0927 00:15:42.595291 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.597430 127143 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
I0927 00:15:42.599001 127143 out.go:177] - Using image docker.io/registry:2.8.3
I0927 00:15:42.600628 127143 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0927 00:15:42.600664 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0927 00:15:42.600839 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube657484236 /etc/kubernetes/addons/registry-rc.yaml
I0927 00:15:42.601106 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.601135 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.606793 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.606948 127143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0927 00:15:42.606977 127143 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0927 00:15:42.607100 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:42.607114 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2137832905 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0927 00:15:42.608745 127143 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0927 00:15:42.610422 127143 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0927 00:15:42.610463 127143 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0927 00:15:42.611427 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1625268335 /etc/kubernetes/addons/metrics-apiservice.yaml
I0927 00:15:42.611495 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.611552 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.614470 127143 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0927 00:15:42.614506 127143 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0927 00:15:42.614649 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1013574020 /etc/kubernetes/addons/ig-role.yaml
I0927 00:15:42.615167 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0927 00:15:42.619911 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0927 00:15:42.622799 127143 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0927 00:15:42.622831 127143 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0927 00:15:42.623090 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube734472719 /etc/kubernetes/addons/yakd-svc.yaml
I0927 00:15:42.628327 127143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0927 00:15:42.628372 127143 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0927 00:15:42.628594 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube234760308 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0927 00:15:42.631446 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:42.631510 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:42.634914 127143 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0927 00:15:42.638481 127143 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0927 00:15:42.638527 127143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0927 00:15:42.638531 127143 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0927 00:15:42.638615 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0927 00:15:42.638844 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3251734669 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0927 00:15:42.641622 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1622212682 /etc/kubernetes/addons/registry-svc.yaml
I0927 00:15:42.644895 127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0927 00:15:42.644934 127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0927 00:15:42.645198 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube284046721 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0927 00:15:42.647339 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.647366 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.648574 127143 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0927 00:15:42.648608 127143 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0927 00:15:42.648833 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3380755994 /etc/kubernetes/addons/ig-rolebinding.yaml
I0927 00:15:42.652117 127143 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0927 00:15:42.652149 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0927 00:15:42.652291 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1755796444 /etc/kubernetes/addons/yakd-dp.yaml
I0927 00:15:42.661158 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.661218 127143 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0927 00:15:42.661236 127143 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0927 00:15:42.661244 127143 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0927 00:15:42.661286 127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0927 00:15:42.667316 127143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0927 00:15:42.667353 127143 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0927 00:15:42.667525 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3659750276 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0927 00:15:42.668724 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:42.668749 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:42.680009 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:42.682051 127143 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0927 00:15:42.683702 127143 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0927 00:15:42.683739 127143 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0927 00:15:42.683879 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3605290430 /etc/kubernetes/addons/ig-clusterrole.yaml
I0927 00:15:42.684902 127143 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0927 00:15:42.685298 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube931078911 /etc/kubernetes/addons/storageclass.yaml
I0927 00:15:42.689730 127143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0927 00:15:42.689767 127143 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0927 00:15:42.690021 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3067791732 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0927 00:15:42.691621 127143 out.go:177] - Using image docker.io/busybox:stable
I0927 00:15:42.703220 127143 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0927 00:15:42.703378 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0927 00:15:42.703784 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2302943204 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0927 00:15:42.714098 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0927 00:15:42.714366 127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0927 00:15:42.714389 127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0927 00:15:42.714523 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3912720131 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0927 00:15:42.715061 127143 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0927 00:15:42.715088 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0927 00:15:42.715206 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube491324605 /etc/kubernetes/addons/registry-proxy.yaml
I0927 00:15:42.719732 127143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0927 00:15:42.719773 127143 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0927 00:15:42.719914 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2321845525 /etc/kubernetes/addons/metrics-server-service.yaml
I0927 00:15:42.727124 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0927 00:15:42.729285 127143 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0927 00:15:42.729327 127143 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0927 00:15:42.730144 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2687041217 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0927 00:15:42.753650 127143 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0927 00:15:42.753688 127143 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0927 00:15:42.754130  127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0927 00:15:42.754524  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube776664454 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0927 00:15:42.754705 127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0927 00:15:42.754858 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube28539729 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0927 00:15:42.755797 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0927 00:15:42.757907 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0927 00:15:42.757942 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0927 00:15:42.774276 127143 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0927 00:15:42.774320 127143 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0927 00:15:42.774488 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3808091519 /etc/kubernetes/addons/ig-crd.yaml
I0927 00:15:42.775819 127143 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0927 00:15:42.775852 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0927 00:15:42.775986 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3737515673 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0927 00:15:42.798259 127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0927 00:15:42.798306 127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0927 00:15:42.798450 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1723079502 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0927 00:15:42.824046 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0927 00:15:42.852092 127143 exec_runner.go:51] Run: sudo systemctl start kubelet
I0927 00:15:42.863846 127143 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0927 00:15:42.863891 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0927 00:15:42.864058 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1905860265 /etc/kubernetes/addons/ig-daemonset.yaml
I0927 00:15:42.892477 127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0927 00:15:42.892844 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0927 00:15:42.893065 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2552489540 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0927 00:15:42.898622 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0927 00:15:42.937933 127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0927 00:15:42.937979 127143 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0927 00:15:42.938136 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2797859149 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0927 00:15:42.942211 127143 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
I0927 00:15:42.945746 127143 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
I0927 00:15:42.945774 127143 node_ready.go:38] duration metric: took 3.52972ms for node "ubuntu-20-agent-9" to be "Ready" ...
I0927 00:15:42.945786 127143 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0927 00:15:42.955399 127143 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0927 00:15:42.993568 127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0927 00:15:42.993624 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0927 00:15:42.993806 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1621205174 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0927 00:15:43.011347 127143 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0927 00:15:43.059524 127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0927 00:15:43.059565 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0927 00:15:43.059716 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2777152784 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0927 00:15:43.158237 127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0927 00:15:43.158290 127143 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0927 00:15:43.158443 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2913326724 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0927 00:15:43.281030 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0927 00:15:43.515014 127143 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0927 00:15:43.601104 127143 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0927 00:15:43.833268 127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.075278624s)
I0927 00:15:43.833308 127143 addons.go:475] Verifying addon registry=true in "minikube"
I0927 00:15:43.837520 127143 out.go:177] * Verifying registry addon...
I0927 00:15:43.841404 127143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0927 00:15:43.852143 127143 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0927 00:15:43.852169 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:43.852498 127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.096655141s)
I0927 00:15:43.852528 127143 addons.go:475] Verifying addon metrics-server=true in "minikube"
I0927 00:15:43.917505 127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.018807483s)
I0927 00:15:43.985146 127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.257781693s)
I0927 00:15:44.348048 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:44.645487 127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.821377634s)
W0927 00:15:44.645534 127143 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0927 00:15:44.645561 127143 retry.go:31] will retry after 128.327586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0927 00:15:44.777214 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0927 00:15:44.846675 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:44.966858 127143 pod_ready.go:103] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
I0927 00:15:45.352597 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:45.637949 127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.022726323s)
I0927 00:15:45.811423 127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.530320851s)
I0927 00:15:45.811465 127143 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
I0927 00:15:45.819721 127143 out.go:177] * Verifying csi-hostpath-driver addon...
I0927 00:15:45.822245 127143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 00:15:45.827421 127143 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:15:45.827450 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:45.846299 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:45.963134 127143 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0927 00:15:45.963170 127143 pod_ready.go:82] duration metric: took 3.007652986s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0927 00:15:45.963184 127143 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0927 00:15:46.327672 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:46.428359 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:46.827601 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:46.845189 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:47.326843 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:47.426377 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:47.627079 127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.849808217s)
I0927 00:15:47.827896 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:47.844918 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:47.968676 127143 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
I0927 00:15:48.328049 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:48.428765 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:48.827658 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:48.845931 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:49.327764 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:49.428298 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:49.539656 127143 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0927 00:15:49.539813 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4150585855 /var/lib/minikube/google_application_credentials.json
I0927 00:15:49.553393 127143 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0927 00:15:49.553541 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube217750436 /var/lib/minikube/google_cloud_project
I0927 00:15:49.565547 127143 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0927 00:15:49.565614 127143 host.go:66] Checking if "minikube" exists ...
I0927 00:15:49.566424 127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
I0927 00:15:49.566454 127143 api_server.go:166] Checking apiserver status ...
I0927 00:15:49.566499 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:49.586819 127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
I0927 00:15:49.598122 127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
I0927 00:15:49.598201 127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
I0927 00:15:49.608404 127143 api_server.go:204] freezer state: "THAWED"
I0927 00:15:49.608444 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:49.613890 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:49.613963 127143 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0927 00:15:49.619988 127143 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0927 00:15:49.621698 127143 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
I0927 00:15:49.622992 127143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0927 00:15:49.623022 127143 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0927 00:15:49.623141 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2286610540 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0927 00:15:49.634254 127143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0927 00:15:49.634292 127143 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0927 00:15:49.634418 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3826646702 /etc/kubernetes/addons/gcp-auth-service.yaml
I0927 00:15:49.650371 127143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0927 00:15:49.650407 127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0927 00:15:49.650541 127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4210990779 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0927 00:15:49.662264 127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0927 00:15:49.826811 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:49.845658 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:49.969808 127143 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
I0927 00:15:50.068528 127143 addons.go:475] Verifying addon gcp-auth=true in "minikube"
I0927 00:15:50.071321 127143 out.go:177] * Verifying gcp-auth addon...
I0927 00:15:50.073946 127143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0927 00:15:50.076404 127143 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0927 00:15:50.328176 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:50.345413 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:50.470435 127143 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0927 00:15:50.470461 127143 pod_ready.go:82] duration metric: took 4.507268273s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0927 00:15:50.470480 127143 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0927 00:15:50.475866 127143 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0927 00:15:50.475893 127143 pod_ready.go:82] duration metric: took 5.404124ms for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0927 00:15:50.475906 127143 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0927 00:15:50.483762 127143 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
I0927 00:15:50.483788 127143 pod_ready.go:82] duration metric: took 7.872857ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
I0927 00:15:50.483799 127143 pod_ready.go:39] duration metric: took 7.537998935s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0927 00:15:50.483829 127143 api_server.go:52] waiting for apiserver process to appear ...
I0927 00:15:50.483900 127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0927 00:15:50.505484 127143 api_server.go:72] duration metric: took 8.058753379s to wait for apiserver process to appear ...
I0927 00:15:50.505515 127143 api_server.go:88] waiting for apiserver healthz status ...
I0927 00:15:50.505540 127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
I0927 00:15:50.510531 127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
ok
I0927 00:15:50.511716 127143 api_server.go:141] control plane version: v1.31.1
I0927 00:15:50.511744 127143 api_server.go:131] duration metric: took 6.223066ms to wait for apiserver health ...
I0927 00:15:50.511752 127143 system_pods.go:43] waiting for kube-system pods to appear ...
I0927 00:15:50.520652 127143 system_pods.go:59] 16 kube-system pods found
I0927 00:15:50.520695 127143 system_pods.go:61] "coredns-7c65d6cfc9-ngvr4" [2c728ce6-d71d-4c64-b8c8-34d355c60149] Running
I0927 00:15:50.520709 127143 system_pods.go:61] "csi-hostpath-attacher-0" [df77f5ca-6299-459d-aafa-f0969d70ecbb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0927 00:15:50.520718 127143 system_pods.go:61] "csi-hostpath-resizer-0" [df25bef3-80be-42a6-a819-9fa1f1302d97] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0927 00:15:50.520731 127143 system_pods.go:61] "csi-hostpathplugin-9646r" [219d4a80-1ca9-4901-b888-e20a6ee002b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0927 00:15:50.520742 127143 system_pods.go:61] "etcd-ubuntu-20-agent-9" [d1556729-ab60-4fb3-a865-8570ee4621fa] Running
I0927 00:15:50.520752 127143 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [64869e41-d09d-4b88-b49f-16fa0da814dd] Running
I0927 00:15:50.520763 127143 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [d234ccc4-c9d8-408f-a89f-b7e0c4f2adaa] Running
I0927 00:15:50.520771 127143 system_pods.go:61] "kube-proxy-r2kqg" [220c7678-ba9d-42fd-b333-f93c0854dd8f] Running
I0927 00:15:50.520783 127143 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [5718108b-d751-4c17-85b4-3a54a6e03dae] Running
I0927 00:15:50.520791 127143 system_pods.go:61] "metrics-server-84c5f94fbc-zb9hk" [8f9b05a4-44d3-4031-a273-6c55abe9fb84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0927 00:15:50.520797 127143 system_pods.go:61] "nvidia-device-plugin-daemonset-rkscq" [b8bdd9f8-c0ca-4711-bd0d-a07df1e4fded] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I0927 00:15:50.520805 127143 system_pods.go:61] "registry-66c9cd494c-5zfg4" [32dd9391-b30e-4231-9d9e-8bd0457919d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0927 00:15:50.520811 127143 system_pods.go:61] "registry-proxy-rbxpj" [ae04301c-b1c9-4a19-af2e-04bc0071e797] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0927 00:15:50.520816 127143 system_pods.go:61] "snapshot-controller-56fcc65765-2wjqz" [f64ff038-32d1-4307-96df-d385f96a0efa] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0927 00:15:50.520822 127143 system_pods.go:61] "snapshot-controller-56fcc65765-g8jrp" [5b1ced1b-2ef9-458a-b4eb-74e96136ca34] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0927 00:15:50.520825 127143 system_pods.go:61] "storage-provisioner" [15bb45eb-db23-4bae-9e56-982f2031327d] Running
I0927 00:15:50.520831 127143 system_pods.go:74] duration metric: took 9.072732ms to wait for pod list to return data ...
I0927 00:15:50.520838 127143 default_sa.go:34] waiting for default service account to be created ...
I0927 00:15:50.523766 127143 default_sa.go:45] found service account: "default"
I0927 00:15:50.523796 127143 default_sa.go:55] duration metric: took 2.950985ms for default service account to be created ...
I0927 00:15:50.523808 127143 system_pods.go:116] waiting for k8s-apps to be running ...
I0927 00:15:50.532797 127143 system_pods.go:86] 16 kube-system pods found
I0927 00:15:50.532832 127143 system_pods.go:89] "coredns-7c65d6cfc9-ngvr4" [2c728ce6-d71d-4c64-b8c8-34d355c60149] Running
I0927 00:15:50.532845 127143 system_pods.go:89] "csi-hostpath-attacher-0" [df77f5ca-6299-459d-aafa-f0969d70ecbb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0927 00:15:50.532854 127143 system_pods.go:89] "csi-hostpath-resizer-0" [df25bef3-80be-42a6-a819-9fa1f1302d97] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0927 00:15:50.532873 127143 system_pods.go:89] "csi-hostpathplugin-9646r" [219d4a80-1ca9-4901-b888-e20a6ee002b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0927 00:15:50.532884 127143 system_pods.go:89] "etcd-ubuntu-20-agent-9" [d1556729-ab60-4fb3-a865-8570ee4621fa] Running
I0927 00:15:50.532892 127143 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [64869e41-d09d-4b88-b49f-16fa0da814dd] Running
I0927 00:15:50.532902 127143 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [d234ccc4-c9d8-408f-a89f-b7e0c4f2adaa] Running
I0927 00:15:50.532909 127143 system_pods.go:89] "kube-proxy-r2kqg" [220c7678-ba9d-42fd-b333-f93c0854dd8f] Running
I0927 00:15:50.532918 127143 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [5718108b-d751-4c17-85b4-3a54a6e03dae] Running
I0927 00:15:50.532927 127143 system_pods.go:89] "metrics-server-84c5f94fbc-zb9hk" [8f9b05a4-44d3-4031-a273-6c55abe9fb84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0927 00:15:50.532939 127143 system_pods.go:89] "nvidia-device-plugin-daemonset-rkscq" [b8bdd9f8-c0ca-4711-bd0d-a07df1e4fded] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I0927 00:15:50.532951 127143 system_pods.go:89] "registry-66c9cd494c-5zfg4" [32dd9391-b30e-4231-9d9e-8bd0457919d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0927 00:15:50.532962 127143 system_pods.go:89] "registry-proxy-rbxpj" [ae04301c-b1c9-4a19-af2e-04bc0071e797] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0927 00:15:50.532972 127143 system_pods.go:89] "snapshot-controller-56fcc65765-2wjqz" [f64ff038-32d1-4307-96df-d385f96a0efa] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0927 00:15:50.533003 127143 system_pods.go:89] "snapshot-controller-56fcc65765-g8jrp" [5b1ced1b-2ef9-458a-b4eb-74e96136ca34] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0927 00:15:50.533013 127143 system_pods.go:89] "storage-provisioner" [15bb45eb-db23-4bae-9e56-982f2031327d] Running
I0927 00:15:50.533032 127143 system_pods.go:126] duration metric: took 9.207708ms to wait for k8s-apps to be running ...
I0927 00:15:50.533044 127143 system_svc.go:44] waiting for kubelet service to be running ....
I0927 00:15:50.533104 127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0927 00:15:50.548603 127143 system_svc.go:56] duration metric: took 15.544485ms WaitForService to wait for kubelet
I0927 00:15:50.548635 127143 kubeadm.go:582] duration metric: took 8.101914346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0927 00:15:50.548661 127143 node_conditions.go:102] verifying NodePressure condition ...
I0927 00:15:50.552115 127143 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0927 00:15:50.552148 127143 node_conditions.go:123] node cpu capacity is 8
I0927 00:15:50.552162 127143 node_conditions.go:105] duration metric: took 3.495203ms to run NodePressure ...
I0927 00:15:50.552177 127143 start.go:241] waiting for startup goroutines ...
I0927 00:15:50.828054 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:50.845416 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:51.327952 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:51.344874 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:51.827796 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:51.845319 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:52.327207 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:52.345385 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:52.829746 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:52.927007 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:53.326848 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:53.344620 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:53.826928 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:53.845195 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:54.326132 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:54.345443 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:54.826978 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:54.845308 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:55.327393 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:55.345255 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:55.826291 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:55.845942 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:56.326997 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:56.345670 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:56.924639 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:56.925497 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:57.327149 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:57.345390 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:57.827488 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:57.846116 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:58.327628 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:58.345906 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:58.827098 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:58.845553 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:59.327540 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:59.564547 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:15:59.826747 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:15:59.844934 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:00.326565 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:00.345209 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:00.827691 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:00.845031 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:01.327982 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:01.345404 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:01.827212 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:01.845855 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:02.327872 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:02.345203 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:02.940267 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:02.941017 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:03.327971 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:03.345195 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:03.827462 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:03.845448 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:04.334793 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:04.344399 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:04.827146 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:04.845669 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:05.327345 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:05.345977 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:05.827534 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:05.846001 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:06.326389 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:06.344789 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:06.827064 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:06.845070 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:07.327059 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:07.345837 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:07.828064 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:07.845064 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:08.326876 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:08.426842 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:08.826565 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:08.845549 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:09.326909 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:09.344931 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:09.826800 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:09.844668 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:10.326171 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:10.346055 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:10.827002 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:10.845490 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:11.327411 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:11.344554 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0927 00:16:11.827124 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:11.846218 127143 kapi.go:107] duration metric: took 28.0048158s to wait for kubernetes.io/minikube-addons=registry ...
I0927 00:16:12.326521 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:12.827052 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:13.327446 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:13.826734 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:14.326392 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:14.827006 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:15.329829 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:15.826974 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:16.326781 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:16.827948 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:17.327439 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:17.827170 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:18.327204 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:18.826640 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:19.327793 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:19.827366 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:20.326788 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:20.828043 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:21.327075 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:21.828376 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:22.327760 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:22.828130 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:23.326998 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:23.870741 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:24.328222 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:24.827217 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:25.327373 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:25.827429 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:26.327147 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:26.827541 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:27.327878 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:27.827843 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:28.327667 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:28.827865 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0927 00:16:29.326240 127143 kapi.go:107] duration metric: took 43.503994746s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0927 00:16:31.577867 127143 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0927 00:16:31.577892 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:16:32.078136 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
[... identical "waiting for pod gcp-auth, current state: Pending" messages repeated every ~0.5s from 00:16:32 through 00:17:07 ...]
I0927 00:17:07.577518 127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0927 00:17:08.077848 127143 kapi.go:107] duration metric: took 1m18.003900162s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0927 00:17:08.079811 127143 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0927 00:17:08.081174 127143 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0927 00:17:08.082714 127143 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0927 00:17:08.084257 127143 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, yakd, metrics-server, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0927 00:17:08.085666 127143 addons.go:510] duration metric: took 1m25.647244924s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner yakd metrics-server inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
I0927 00:17:08.085733 127143 start.go:246] waiting for cluster config update ...
I0927 00:17:08.085757 127143 start.go:255] writing updated cluster config ...
I0927 00:17:08.086173 127143 exec_runner.go:51] Run: rm -f paused
I0927 00:17:08.131608 127143 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
I0927 00:17:08.133847 127143 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Fri 2024-09-20 09:35:01 UTC, end at Fri 2024-09-27 00:27:01 UTC. --
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.260902721Z" level=error msg="Error running exec 8f6fce7b85494bb989da6ef75ed7d23504a21c1837b6f78f345439ca76592043 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=73d2257ada5f6b41 traceID=f4c71e2cd8f5468baa3f08c78ff913d9
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.261021424Z" level=error msg="Error running exec f9b42e182c3f544f16b8e520f7ff27d4a1db9bf46c92d3c73d09d0cc026bd398 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=fe17ccaa7ed484a2 traceID=d75534bef09f645d796fea077a17d190
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.427035575Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.427035610Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.427765319Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.427788353Z" level=error msg="stream copy error: reading from a closed fifo"
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.429080063Z" level=error msg="Error running exec efa3d5401188a5731dd13bf470f608dee1855545736d049c7fe4a9252671173e in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=c55f759e9e7de4df traceID=f0d584bec7844ce0f9da22fcb525d258
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.429784326Z" level=error msg="Error running exec f1e7cfac7879ae1f805c3317fdffb8236f0e017d42fde8406905039c3a6f3ac0 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=33c3e99e37cfad30 traceID=899a79309f45bf2f14582dc77e10915d
Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.470407684Z" level=info msg="ignoring event" container=ce0f95495465f6082ad910b695a398fd1abb55f85605cdea136b77cefb462fe2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:22:28 ubuntu-20-agent-9 cri-dockerd[127689]: time="2024-09-27T00:22:28Z" level=error msg="error getting RW layer size for container ID 'dca01d29e59d31c5cec505ae4fc9af9fadda55955c5e7343d6d3dd6a8bafd167': Error response from daemon: No such container: dca01d29e59d31c5cec505ae4fc9af9fadda55955c5e7343d6d3dd6a8bafd167"
Sep 27 00:22:28 ubuntu-20-agent-9 cri-dockerd[127689]: time="2024-09-27T00:22:28Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dca01d29e59d31c5cec505ae4fc9af9fadda55955c5e7343d6d3dd6a8bafd167'"
Sep 27 00:23:39 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:23:39.942728851Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=dc9e9c8fe9ad1f4c traceID=988b9d8e984c98bf88d12e5db10bd987
Sep 27 00:23:39 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:23:39.945119098Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=dc9e9c8fe9ad1f4c traceID=988b9d8e984c98bf88d12e5db10bd987
Sep 27 00:26:00 ubuntu-20-agent-9 cri-dockerd[127689]: time="2024-09-27T00:26:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/05a0c40afc1b105ed2b99466a8ded414220e7d66cc4fd14513ed8537f9533408/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Sep 27 00:26:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:00.968223814Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2b643a0a4af457ce traceID=381265c75db230cfacf9a4cecad65a53
Sep 27 00:26:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:00.970742929Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2b643a0a4af457ce traceID=381265c75db230cfacf9a4cecad65a53
Sep 27 00:26:11 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:11.945337285Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=da6241dedf25dedc traceID=c8b5153b61ee4151b35f2ea417ecc055
Sep 27 00:26:11 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:11.947699648Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=da6241dedf25dedc traceID=c8b5153b61ee4151b35f2ea417ecc055
Sep 27 00:26:34 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:34.941175451Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=6fa477e395867ce7 traceID=1034f779699e25e4f45d57ebeda1497d
Sep 27 00:26:34 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:34.943412390Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=6fa477e395867ce7 traceID=1034f779699e25e4f45d57ebeda1497d
Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.392499477Z" level=info msg="ignoring event" container=05a0c40afc1b105ed2b99466a8ded414220e7d66cc4fd14513ed8537f9533408 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.678222896Z" level=info msg="ignoring event" container=c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.731925162Z" level=info msg="ignoring event" container=8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.813428728Z" level=info msg="ignoring event" container=758e8d161387b2b912d19e92304cb494569c51650f613d7ff053e817484e383e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.891019276Z" level=info msg="ignoring event" container=a6ee60d7c438368452a602c57d7e4c3406b0d4ba690c8da561c1ec2e78f47991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
ce0f95495465f ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec 4 minutes ago Exited gadget 6 7a11cf51cdfef gadget-zsrbc
ca7eac916e6f1 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 9 minutes ago Running gcp-auth 0 9a54d61c131b8 gcp-auth-89d5ffd79-68r2k
06b7ac20f7718 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 10 minutes ago Running csi-snapshotter 0 81523afd9c74d csi-hostpathplugin-9646r
e8960bbdc90f0 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 10 minutes ago Running csi-provisioner 0 81523afd9c74d csi-hostpathplugin-9646r
820915b2ac3b6 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 10 minutes ago Running liveness-probe 0 81523afd9c74d csi-hostpathplugin-9646r
d0814b540f12e registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 10 minutes ago Running hostpath 0 81523afd9c74d csi-hostpathplugin-9646r
8e1d359469553 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 10 minutes ago Running node-driver-registrar 0 81523afd9c74d csi-hostpathplugin-9646r
00b4ac66c18f9 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 10 minutes ago Running csi-resizer 0 302ea4417927a csi-hostpath-resizer-0
89d52d576f143 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 10 minutes ago Running csi-external-health-monitor-controller 0 81523afd9c74d csi-hostpathplugin-9646r
462b3fa12bda1 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 10 minutes ago Running csi-attacher 0 9d5d8cd6fa6e7 csi-hostpath-attacher-0
adbb46dace6f0 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 342cd307b62e2 snapshot-controller-56fcc65765-2wjqz
b9df1158720a1 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 10 minutes ago Running volume-snapshot-controller 0 9d1cc670d574a snapshot-controller-56fcc65765-g8jrp
adba821d320b5 rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 10 minutes ago Running local-path-provisioner 0 b4a2cc2a14139 local-path-provisioner-86d989889c-knfxk
8af29fe74f1ff gcr.io/k8s-minikube/kube-registry-proxy@sha256:9fd683b2e47c5fded3410c69f414f05cdee737597569f52854347f889b118982 10 minutes ago Exited registry-proxy 0 a6ee60d7c4383 registry-proxy-rbxpj
c3ee262cb7bba registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90 10 minutes ago Exited registry 0 758e8d161387b registry-66c9cd494c-5zfg4
73934b8e99884 registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9 10 minutes ago Running metrics-server 0 735c3f6e6084f metrics-server-84c5f94fbc-zb9hk
720baba6a0a50 marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 10 minutes ago Running yakd 0 f5cf42776ab9f yakd-dashboard-67d98fc6b-pn7ph
0ea696ac52da5 gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e 11 minutes ago Running cloud-spanner-emulator 0 5a34cb10739d6 cloud-spanner-emulator-5b584cc74-gppng
cc4778fb3760c nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47 11 minutes ago Running nvidia-device-plugin-ctr 0 cb890f7ad36eb nvidia-device-plugin-daemonset-rkscq
e7fc0464842cb c69fa2e9cbf5f 11 minutes ago Running coredns 0 5e3ff354ea15c coredns-7c65d6cfc9-ngvr4
04c3b6319c92f 6e38f40d628db 11 minutes ago Running storage-provisioner 0 aaaaca261e98f storage-provisioner
4bbbf5ccddce6 60c005f310ff3 11 minutes ago Running kube-proxy 0 edd77aaf6a0c9 kube-proxy-r2kqg
234705c660c04 175ffd71cce3d 11 minutes ago Running kube-controller-manager 0 9c2068a7695da kube-controller-manager-ubuntu-20-agent-9
dbeeaa776b168 2e96e5913fc06 11 minutes ago Running etcd 0 88f014fe5dc15 etcd-ubuntu-20-agent-9
dd123a1910a91 9aa1fad941575 11 minutes ago Running kube-scheduler 0 efe699020245d kube-scheduler-ubuntu-20-agent-9
9a96c9e4e13e6 6bab7719df100 11 minutes ago Running kube-apiserver 0 7099bd6337394 kube-apiserver-ubuntu-20-agent-9
==> coredns [e7fc0464842c] <==
[INFO] 10.244.0.9:33024 - 57761 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000215701s
[INFO] 10.244.0.9:59900 - 61589 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082388s
[INFO] 10.244.0.9:59900 - 61300 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096587s
[INFO] 10.244.0.9:49832 - 6045 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000088311s
[INFO] 10.244.0.9:49832 - 5603 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000114274s
[INFO] 10.244.0.9:47003 - 11511 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000127998s
[INFO] 10.244.0.9:47003 - 11170 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000116317s
[INFO] 10.244.0.9:33332 - 11698 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000085891s
[INFO] 10.244.0.9:33332 - 11435 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000085608s
[INFO] 10.244.0.9:41602 - 16135 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078984s
[INFO] 10.244.0.9:41602 - 16550 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000140059s
[INFO] 10.244.0.23:52742 - 57224 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000304535s
[INFO] 10.244.0.23:39146 - 34841 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000381091s
[INFO] 10.244.0.23:51632 - 34204 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161995s
[INFO] 10.244.0.23:39105 - 19433 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00021772s
[INFO] 10.244.0.23:48202 - 26605 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135991s
[INFO] 10.244.0.23:50207 - 30094 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000188839s
[INFO] 10.244.0.23:33285 - 57009 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.002580052s
[INFO] 10.244.0.23:46183 - 64236 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.002562664s
[INFO] 10.244.0.23:50335 - 27680 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004121422s
[INFO] 10.244.0.23:48641 - 26642 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00472598s
[INFO] 10.244.0.23:37195 - 56263 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002239937s
[INFO] 10.244.0.23:47851 - 38383 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003468041s
[INFO] 10.244.0.23:39185 - 39105 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002931801s
[INFO] 10.244.0.23:34588 - 24477 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003062669s
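The NXDOMAIN cascade above is the standard resolv.conf search-path expansion: with `ndots:5`, a name with fewer than five dots is first tried against every search domain (producing one NXDOMAIN per domain) before the bare name is resolved. A minimal sketch of that candidate-generation order, assuming the search list and `ndots` value from the pod resolv.conf rewritten by cri-dockerd earlier in this log (this is an illustration, not the actual glibc/CoreDNS code path):

```python
# Search list and ndots taken from the cri-dockerd resolv.conf rewrite above.
SEARCH = [
    "default.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "europe-west2-a.c.k8s-minikube.internal",
    "c.k8s-minikube.internal",
    "google.internal",
]
NDOTS = 5

def candidates(name: str) -> list[str]:
    """Return lookup candidates in the order a glibc-style resolver tries them."""
    expanded = [f"{name}.{dom}" for dom in SEARCH]
    if name.count(".") >= NDOTS:
        # "Absolute enough": try the bare name first, search domains after.
        return [name] + expanded
    # Otherwise every search domain is appended first; the bare name is last,
    # which is why the NOERROR answer only appears at the end of the cascade.
    return expanded + [name]

# "registry.kube-system.svc.cluster.local" has 4 dots (< ndots), so each
# search-domain suffix is queried (NXDOMAIN) before the bare name succeeds.
for c in candidates("registry.kube-system.svc.cluster.local"):
    print(c)
```

This matches the log: each `A`/`AAAA` pair for `registry.kube-system.svc.cluster.local.<search-domain>` returns NXDOMAIN, and only the final bare-name query returns NOERROR.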
==> describe nodes <==
Name: ubuntu-20-agent-9
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-9
kubernetes.io/os=linux
minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_09_27T00_15_38_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-9
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 27 Sep 2024 00:15:35 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-9
AcquireTime: <unset>
RenewTime: Fri, 27 Sep 2024 00:26:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 27 Sep 2024 00:22:47 +0000 Fri, 27 Sep 2024 00:15:34 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 27 Sep 2024 00:22:47 +0000 Fri, 27 Sep 2024 00:15:34 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 27 Sep 2024 00:22:47 +0000 Fri, 27 Sep 2024 00:15:34 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 27 Sep 2024 00:22:47 +0000 Fri, 27 Sep 2024 00:15:36 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.154.0.4
Hostname: ubuntu-20-agent-9
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859312Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859312Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 4894487b-7b30-e033-3a9d-c6f45b6c4cf8
Boot ID: 3c2d51bd-7f5b-4d40-a494-e8e3ec27c9f9
Kernel Version: 5.15.0-1069-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.3.1
Kubelet Version: v1.31.1
Kube-Proxy Version: v1.31.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (20 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m13s
default cloud-spanner-emulator-5b584cc74-gppng 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gadget gadget-zsrbc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
gcp-auth gcp-auth-89d5ffd79-68r2k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-7c65d6cfc9-ngvr4 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 11m
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system csi-hostpathplugin-9646r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system etcd-ubuntu-20-agent-9 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 11m
kube-system kube-apiserver-ubuntu-20-agent-9 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-ubuntu-20-agent-9 200m (2%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-r2kqg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-ubuntu-20-agent-9 100m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system metrics-server-84c5f94fbc-zb9hk 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 11m
kube-system nvidia-device-plugin-daemonset-rkscq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-2wjqz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system snapshot-controller-56fcc65765-g8jrp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
local-path-storage local-path-provisioner-86d989889c-knfxk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
yakd-dashboard yakd-dashboard-67d98fc6b-pn7ph 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kube-proxy
Normal Starting 11m kubelet Starting kubelet.
Warning CgroupV1 11m kubelet Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 11m kubelet Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
Normal RegisteredNode 11m node-controller Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
==> dmesg <==
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 75 f0 ba e0 bb 08 06
[ +1.354370] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 cb 1b 34 5d 43 08 06
[ +0.010250] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 f3 06 28 20 57 08 06
[ +2.816799] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 96 05 bc 08 21 08 06
[ +1.771058] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 ad cf 8b 10 5d 08 06
[ +2.020097] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c8 87 06 4d 2a 08 06
[ +5.837384] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 d4 6d c5 b4 4c 08 06
[ +0.062701] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff aa 17 bb bd 4e 31 08 06
[ +0.126366] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 02 b5 7b d0 2a 08 06
[ +28.674986] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a e2 2c 32 5b 91 08 06
[ +0.031487] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 72 dd 84 17 3a 08 06
[Sep27 00:17] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 2a 49 a4 35 f3 08 06
[ +0.000518] IPv4: martian source 10.244.0.23 from 10.244.0.6, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 ac 46 8e 90 7d 08 06
==> etcd [dbeeaa776b16] <==
{"level":"info","ts":"2024-09-27T00:15:34.383390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became pre-candidate at term 1"}
{"level":"info","ts":"2024-09-27T00:15:34.383415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgPreVoteResp from 82d4d36e40f9b4a at term 1"}
{"level":"info","ts":"2024-09-27T00:15:34.383436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became candidate at term 2"}
{"level":"info","ts":"2024-09-27T00:15:34.383443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgVoteResp from 82d4d36e40f9b4a at term 2"}
{"level":"info","ts":"2024-09-27T00:15:34.383455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became leader at term 2"}
{"level":"info","ts":"2024-09-27T00:15:34.383465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
{"level":"info","ts":"2024-09-27T00:15:34.384290Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-27T00:15:34.384778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-27T00:15:34.384776Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
{"level":"info","ts":"2024-09-27T00:15:34.384805Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-09-27T00:15:34.385013Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-27T00:15:34.385085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-09-27T00:15:34.385104Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-09-27T00:15:34.385124Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-27T00:15:34.385153Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-09-27T00:15:34.386524Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-27T00:15:34.387499Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-09-27T00:15:34.388097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
{"level":"info","ts":"2024-09-27T00:15:34.388592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-09-27T00:15:57.063883Z","caller":"traceutil/trace.go:171","msg":"trace[223436893] transaction","detail":"{read_only:false; response_revision:877; number_of_response:1; }","duration":"137.954747ms","start":"2024-09-27T00:15:56.925907Z","end":"2024-09-27T00:15:57.063862Z","steps":["trace[223436893] 'process raft request' (duration: 129.183297ms)"],"step_count":1}
{"level":"warn","ts":"2024-09-27T00:16:02.937749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.196652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-09-27T00:16:02.937845Z","caller":"traceutil/trace.go:171","msg":"trace[1777800074] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:902; }","duration":"113.320376ms","start":"2024-09-27T00:16:02.824508Z","end":"2024-09-27T00:16:02.937829Z","steps":["trace[1777800074] 'range keys from in-memory index tree' (duration: 113.117232ms)"],"step_count":1}
{"level":"info","ts":"2024-09-27T00:25:34.439520Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1703}
{"level":"info","ts":"2024-09-27T00:25:34.463116Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1703,"took":"22.957679ms","hash":2922577851,"current-db-size-bytes":8384512,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4362240,"current-db-size-in-use":"4.4 MB"}
{"level":"info","ts":"2024-09-27T00:25:34.463182Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2922577851,"revision":1703,"compact-revision":-1}
==> gcp-auth [ca7eac916e6f] <==
2024/09/27 00:17:06 GCP Auth Webhook started!
2024/09/27 00:17:23 Ready to marshal response ...
2024/09/27 00:17:23 Ready to write response ...
2024/09/27 00:17:23 Ready to marshal response ...
2024/09/27 00:17:23 Ready to write response ...
2024/09/27 00:17:48 Ready to marshal response ...
2024/09/27 00:17:48 Ready to write response ...
2024/09/27 00:17:48 Ready to marshal response ...
2024/09/27 00:17:48 Ready to write response ...
2024/09/27 00:17:48 Ready to marshal response ...
2024/09/27 00:17:48 Ready to write response ...
2024/09/27 00:26:00 Ready to marshal response ...
2024/09/27 00:26:00 Ready to write response ...
==> kernel <==
00:27:01 up 2:09, 0 users, load average: 0.15, 0.34, 0.39
Linux ubuntu-20-agent-9 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [9a96c9e4e13e] <==
W0927 00:16:30.677780 1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.91.226:443: connect: connection refused
W0927 00:16:31.085501 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.124.116:443: connect: connection refused
E0927 00:16:31.085547 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.124.116:443: connect: connection refused" logger="UnhandledError"
W0927 00:16:53.098408 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.124.116:443: connect: connection refused
E0927 00:16:53.098451 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.124.116:443: connect: connection refused" logger="UnhandledError"
W0927 00:16:53.112089 1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.124.116:443: connect: connection refused
E0927 00:16:53.112201 1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.124.116:443: connect: connection refused" logger="UnhandledError"
I0927 00:17:23.385752 1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
I0927 00:17:23.404645 1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
I0927 00:17:37.860763 1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
I0927 00:17:37.880102 1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
I0927 00:17:37.975732 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0927 00:17:37.991821 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0927 00:17:38.016645 1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
I0927 00:17:38.017662 1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
I0927 00:17:38.186693 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0927 00:17:38.204385 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
I0927 00:17:38.226569 1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
W0927 00:17:38.910654 1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
W0927 00:17:39.033990 1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
W0927 00:17:39.046922 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0927 00:17:39.124174 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0927 00:17:39.227003 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0927 00:17:39.305736 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0927 00:17:39.430827 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
==> kube-controller-manager [234705c660c0] <==
W0927 00:25:59.316067 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:25:59.316108 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:01.442959 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:01.443024 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:08.646840 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:08.646889 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:12.501731 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:12.501781 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:23.950067 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:23.950115 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:27.074730 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:27.074779 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:28.608445 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:28.608490 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:48.633823 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:48.633868 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:50.548199 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:50.548245 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:55.725638 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:55.725691 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:58.030860 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:58.030903 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0927 00:26:58.894926 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0927 00:26:58.894976 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0927 00:27:00.638165 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.757µs"
==> kube-proxy [4bbbf5ccddce] <==
I0927 00:15:44.189088 1 server_linux.go:66] "Using iptables proxy"
I0927 00:15:44.339338 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
E0927 00:15:44.339521 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0927 00:15:44.502667 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0927 00:15:44.502744 1 server_linux.go:169] "Using iptables Proxier"
I0927 00:15:44.569560 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0927 00:15:44.570047 1 server.go:483] "Version info" version="v1.31.1"
I0927 00:15:44.570077 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0927 00:15:44.572381 1 config.go:199] "Starting service config controller"
I0927 00:15:44.572414 1 shared_informer.go:313] Waiting for caches to sync for service config
I0927 00:15:44.572439 1 config.go:105] "Starting endpoint slice config controller"
I0927 00:15:44.572443 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0927 00:15:44.572950 1 config.go:328] "Starting node config controller"
I0927 00:15:44.572960 1 shared_informer.go:313] Waiting for caches to sync for node config
I0927 00:15:44.673419 1 shared_informer.go:320] Caches are synced for node config
I0927 00:15:44.673464 1 shared_informer.go:320] Caches are synced for service config
I0927 00:15:44.673479 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [dd123a1910a9] <==
W0927 00:15:35.354017 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0927 00:15:35.354029 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0927 00:15:35.354040 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
E0927 00:15:35.354042 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0927 00:15:35.354105 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0927 00:15:35.354122 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.270969 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0927 00:15:36.271010 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.382268 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0927 00:15:36.382308 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.468125 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0927 00:15:36.468168 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.501672 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0927 00:15:36.501721 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.501760 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0927 00:15:36.501796 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.535404 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0927 00:15:36.535454 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.542953 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0927 00:15:36.543001 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.590201 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0927 00:15:36.590468 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0927 00:15:36.728018 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0927 00:15:36.728073 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
I0927 00:15:38.451719 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Fri 2024-09-20 09:35:01 UTC, end at Fri 2024-09-27 00:27:01 UTC. --
Sep 27 00:26:48 ubuntu-20-agent-9 kubelet[128562]: I0927 00:26:48.789467 128562 scope.go:117] "RemoveContainer" containerID="ce0f95495465f6082ad910b695a398fd1abb55f85605cdea136b77cefb462fe2"
Sep 27 00:26:48 ubuntu-20-agent-9 kubelet[128562]: E0927 00:26:48.789649 128562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-zsrbc_gadget(8164a4a1-7793-4235-8a51-19ef23903995)\"" pod="gadget/gadget-zsrbc" podUID="8164a4a1-7793-4235-8a51-19ef23903995"
Sep 27 00:26:49 ubuntu-20-agent-9 kubelet[128562]: E0927 00:26:49.791351 128562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="0a623e67-b01b-4dcd-b6c9-32493ac56396"
Sep 27 00:26:53 ubuntu-20-agent-9 kubelet[128562]: E0927 00:26:53.791527 128562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="d20a513a-e4b3-49de-9860-4ea508ac296a"
Sep 27 00:26:57 ubuntu-20-agent-9 kubelet[128562]: I0927 00:26:57.790252 128562 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rbxpj" secret="" err="secret \"gcp-auth\" not found"
Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.573105 128562 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0a623e67-b01b-4dcd-b6c9-32493ac56396-gcp-creds\") pod \"0a623e67-b01b-4dcd-b6c9-32493ac56396\" (UID: \"0a623e67-b01b-4dcd-b6c9-32493ac56396\") "
Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.573158 128562 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66kzv\" (UniqueName: \"kubernetes.io/projected/0a623e67-b01b-4dcd-b6c9-32493ac56396-kube-api-access-66kzv\") pod \"0a623e67-b01b-4dcd-b6c9-32493ac56396\" (UID: \"0a623e67-b01b-4dcd-b6c9-32493ac56396\") "
Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.573189 128562 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a623e67-b01b-4dcd-b6c9-32493ac56396-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0a623e67-b01b-4dcd-b6c9-32493ac56396" (UID: "0a623e67-b01b-4dcd-b6c9-32493ac56396"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.573272 128562 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0a623e67-b01b-4dcd-b6c9-32493ac56396-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.575095 128562 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a623e67-b01b-4dcd-b6c9-32493ac56396-kube-api-access-66kzv" (OuterVolumeSpecName: "kube-api-access-66kzv") pod "0a623e67-b01b-4dcd-b6c9-32493ac56396" (UID: "0a623e67-b01b-4dcd-b6c9-32493ac56396"). InnerVolumeSpecName "kube-api-access-66kzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.675303 128562 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-66kzv\" (UniqueName: \"kubernetes.io/projected/0a623e67-b01b-4dcd-b6c9-32493ac56396-kube-api-access-66kzv\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.977485 128562 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h578x\" (UniqueName: \"kubernetes.io/projected/32dd9391-b30e-4231-9d9e-8bd0457919d8-kube-api-access-h578x\") pod \"32dd9391-b30e-4231-9d9e-8bd0457919d8\" (UID: \"32dd9391-b30e-4231-9d9e-8bd0457919d8\") "
Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.979841 128562 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dd9391-b30e-4231-9d9e-8bd0457919d8-kube-api-access-h578x" (OuterVolumeSpecName: "kube-api-access-h578x") pod "32dd9391-b30e-4231-9d9e-8bd0457919d8" (UID: "32dd9391-b30e-4231-9d9e-8bd0457919d8"). InnerVolumeSpecName "kube-api-access-h578x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.077830 128562 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnszc\" (UniqueName: \"kubernetes.io/projected/ae04301c-b1c9-4a19-af2e-04bc0071e797-kube-api-access-nnszc\") pod \"ae04301c-b1c9-4a19-af2e-04bc0071e797\" (UID: \"ae04301c-b1c9-4a19-af2e-04bc0071e797\") "
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.077901 128562 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h578x\" (UniqueName: \"kubernetes.io/projected/32dd9391-b30e-4231-9d9e-8bd0457919d8-kube-api-access-h578x\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.079736 128562 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae04301c-b1c9-4a19-af2e-04bc0071e797-kube-api-access-nnszc" (OuterVolumeSpecName: "kube-api-access-nnszc") pod "ae04301c-b1c9-4a19-af2e-04bc0071e797" (UID: "ae04301c-b1c9-4a19-af2e-04bc0071e797"). InnerVolumeSpecName "kube-api-access-nnszc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.178754 128562 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nnszc\" (UniqueName: \"kubernetes.io/projected/ae04301c-b1c9-4a19-af2e-04bc0071e797-kube-api-access-nnszc\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.358391 128562 scope.go:117] "RemoveContainer" containerID="8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.377973 128562 scope.go:117] "RemoveContainer" containerID="8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: E0927 00:27:01.380177 128562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da" containerID="8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.380254 128562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"} err="failed to get container status \"8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.380298 128562 scope.go:117] "RemoveContainer" containerID="c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.406186 128562 scope.go:117] "RemoveContainer" containerID="c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: E0927 00:27:01.407411 128562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9" containerID="c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"
Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.407462 128562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"} err="failed to get container status \"c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9\": rpc error: code = Unknown desc = Error response from daemon: No such container: c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"
==> storage-provisioner [04c3b6319c92] <==
I0927 00:15:44.736628 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0927 00:15:44.753330 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0927 00:15:44.753413 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0927 00:15:44.763082 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0927 00:15:44.763304 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_6a70a112-d043-4475-acee-e9cda686ee4c!
I0927 00:15:44.763427 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82539bb9-ad30-48bd-a0fd-ef0aafd10987", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_6a70a112-d043-4475-acee-e9cda686ee4c became leader
I0927 00:15:44.864472 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_6a70a112-d043-4475-acee-e9cda686ee4c!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubuntu-20-agent-9/10.154.0.4
Start Time:       Fri, 27 Sep 2024 00:17:48 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.25
IPs:
  IP:  10.244.0.25
Containers:
  busybox:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:      ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brwb6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-brwb6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-9
  Normal   Pulling    7m42s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     7m42s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
  Warning  Failed     7m42s (x4 over 9m13s)  kubelet            Error: ErrImagePull
  Warning  Failed     7m27s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m1s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.91s)
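The failure above has two distinct symptoms: the in-cluster `wget` against `registry.kube-system.svc.cluster.local` timed out, and the `busybox` pod was stuck in `ImagePullBackOff` due to an `unauthorized: authentication failed` error pulling from gcr.io. A minimal sketch for manually re-running the same checks, assuming a running minikube cluster with the registry addon enabled (the pod name `registry-check` is hypothetical, chosen to avoid clashing with the test's `registry-test` pod):

```shell
# Re-run the in-cluster registry reachability check that the test performs
# (same image and wget invocation as addons_test.go:343).
kubectl --context minikube run registry-check --rm -it --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# If a pod sits in ImagePullBackOff (as busybox did above), the pull error
# is visible in its Events section.
kubectl --context minikube describe pod registry-check

# Check the registry endpoint from the host, as the test harness does
# after obtaining the node IP via `minikube ip`.
curl -sS "http://$(minikube ip):5000/"
```

These commands only diagnose; an `unauthorized` error from gcr.io typically points at the pulling node's credentials or network, not at the registry addon itself.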