=== RUN TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 9.69671ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-tg5kv" [99b244b0-02bb-4d7b-8b98-f38c99f1949e] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008391198s
addons_test.go:372: (dbg) Run: kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (59.755116ms)
** stderr **
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
** /stderr **
[... the same `kubectl --context minikube top pods -n kube-system` invocation was retried 7 more times, each exiting with status 1 (54-59ms) and the same ServiceUnavailable stderr ...]
addons_test.go:372: (dbg) Run: kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (68.079713ms)
** stderr **
error: Metrics not available for pod kube-system/coredns-565d847f94-hzdcg, age: 4m57.900026939s
** /stderr **
addons_test.go:372: (dbg) Run: kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (70.280932ms)
** stderr **
error: Metrics not available for pod kube-system/coredns-565d847f94-hzdcg, age: 5m55.526137141s
** /stderr **
addons_test.go:372: (dbg) Run: kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (66.206146ms)
** stderr **
error: Metrics not available for pod kube-system/coredns-565d847f94-hzdcg, age: 7m16.909353382s
** /stderr **
addons_test.go:372: (dbg) Run: kubectl --context minikube top pods -n kube-system
addons_test.go:372: (dbg) Non-zero exit: kubectl --context minikube top pods -n kube-system: exit status 1 (66.886731ms)
** stderr **
error: Metrics not available for pod kube-system/coredns-565d847f94-hzdcg, age: 7m50.042439607s
** /stderr **
addons_test.go:386: failed checking metric server: exit status 1
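The pattern above — addons_test.go:372 re-running `kubectl top pods` until it exits 0 or the test gives up — can be sketched as a generic POSIX shell poll loop. This is an illustrative sketch only; `retry_until` and its arguments are hypothetical names, not minikube's actual test helper:

```shell
#!/bin/sh
# retry_until: re-run a command until it exits 0 or a deadline passes.
# Usage: retry_until <timeout_seconds> <interval_seconds> <command...>
retry_until() {
  timeout=$1; interval=$2; shift 2
  deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out waiting for: $*" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# e.g. retry_until 360 10 kubectl --context minikube top pods -n kube-system
```

In the failed run, every attempt inside the window returned exit status 1, so the loop exhausted its deadline and the test reported "failed checking metric server: exit status 1".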
addons_test.go:389: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p minikube logs -n 25: (1.540857017s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs:
-- stdout --
*
* ==> Audit <==
* |---------|--------------------------------|----------|------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------|----------|------|---------|---------------------|---------------------|
| start | -o=json --download-only | minikube | root | v1.28.0 | 14 Jan 23 10:05 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| start | -o=json --download-only | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
| delete | -p minikube | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
| delete | -p minikube | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
| start | --download-only -p | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:43039 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
| start | -p minikube --alsologtostderr | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:06 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | root | v1.28.0 | 14 Jan 23 10:06 UTC | 14 Jan 23 10:07 UTC |
| start | -p minikube --wait=true | minikube | root | v1.28.0 | 14 Jan 23 10:07 UTC | 14 Jan 23 10:07 UTC |
| | --memory=4000 | | | | | |
| | --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| | --addons=helm-tiller | | | | | |
| ip | minikube ip | minikube | root | v1.28.0 | 14 Jan 23 10:08 UTC | 14 Jan 23 10:08 UTC |
| addons | minikube addons disable | minikube | root | v1.28.0 | 14 Jan 23 10:09 UTC | 14 Jan 23 10:09 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | minikube addons | minikube | root | v1.28.0 | 14 Jan 23 10:15 UTC | 14 Jan 23 10:15 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|---------|--------------------------------|----------|------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/14 10:07:01
Running on machine: ubuntu-20-agent
Binary: Built with gc go1.19.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0114 10:07:01.557042 16385 out.go:296] Setting OutFile to fd 1 ...
I0114 10:07:01.557159 16385 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
I0114 10:07:01.557170 16385 out.go:309] Setting ErrFile to fd 2...
I0114 10:07:01.557177 16385 out.go:343] TERM=unknown,COLORTERM=, which probably does not support color
I0114 10:07:01.557291 16385 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-3824/.minikube/bin
I0114 10:07:01.557744 16385 out.go:303] Setting JSON to false
I0114 10:07:01.558681 16385 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2969,"bootTime":1673687853,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0114 10:07:01.558750 16385 start.go:135] virtualization: kvm guest
I0114 10:07:01.561975 16385 out.go:177] * minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
W0114 10:07:01.563910 16385 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-3824/.minikube/cache/preloaded-tarball: no such file or directory
I0114 10:07:01.565554 16385 out.go:177] - MINIKUBE_LOCATION=15642
I0114 10:07:01.563993 16385 notify.go:220] Checking for updates...
I0114 10:07:01.569022 16385 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 10:07:01.570978 16385 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15642-3824/kubeconfig
I0114 10:07:01.572748 16385 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-3824/.minikube
I0114 10:07:01.574450 16385 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0114 10:07:01.576223 16385 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 10:07:01.577904 16385 out.go:177] * Using the none driver based on user configuration
I0114 10:07:01.579439 16385 start.go:294] selected driver: none
I0114 10:07:01.579467 16385 start.go:838] validating driver "none" against <nil>
I0114 10:07:01.579487 16385 start.go:849] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 10:07:01.579523 16385 start.go:1598] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0114 10:07:01.579948 16385 out.go:239] ! The 'none' driver does not respect the --memory flag
I0114 10:07:01.580617 16385 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0114 10:07:01.580850 16385 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0114 10:07:01.580889 16385 cni.go:95] Creating CNI manager for ""
I0114 10:07:01.580904 16385 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
I0114 10:07:01.580913 16385 start_flags.go:319] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:
/var/run/socket_vmnet StaticIP:}
I0114 10:07:01.583846 16385 out.go:177] * Starting control plane node minikube in cluster minikube
I0114 10:07:01.585671 16385 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/config.json ...
I0114 10:07:01.585709 16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/config.json: {Name:mkcb0f273917183e513823dd07fda69d303637e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:01.586018 16385 cache.go:193] Successfully downloaded all kic artifacts
I0114 10:07:01.586044 16385 start.go:364] acquiring machines lock for minikube: {Name:mk211048cabacb95867cd61d1afd712ed43b6718 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0114 10:07:01.586097 16385 start.go:368] acquired machines lock for "minikube" in 37.13µs
I0114 10:07:01.586112 16385 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name:m01 IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 10:07:01.586179 16385 start.go:125] createHost starting for "m01" (driver="none")
I0114 10:07:01.588178 16385 out.go:177] * Running on localhost (CPUs=8, Memory=32101MB, Disk=297540MB) ...
I0114 10:07:01.589937 16385 exec_runner.go:51] Run: systemctl --version
I0114 10:07:01.592369 16385 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0114 10:07:01.592412 16385 client.go:168] LocalClient.Create starting
I0114 10:07:01.592478 16385 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-3824/.minikube/certs/ca.pem
I0114 10:07:01.592507 16385 main.go:134] libmachine: Decoding PEM data...
I0114 10:07:01.592522 16385 main.go:134] libmachine: Parsing certificate...
I0114 10:07:01.592571 16385 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-3824/.minikube/certs/cert.pem
I0114 10:07:01.592592 16385 main.go:134] libmachine: Decoding PEM data...
I0114 10:07:01.592603 16385 main.go:134] libmachine: Parsing certificate...
I0114 10:07:01.592920 16385 client.go:171] LocalClient.Create took 500.368µs
I0114 10:07:01.592944 16385 start.go:167] duration metric: libmachine.API.Create for "minikube" took 577.029µs
I0114 10:07:01.592950 16385 start.go:300] post-start starting for "minikube" (driver="none")
I0114 10:07:01.592979 16385 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0114 10:07:01.593009 16385 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0114 10:07:01.606636 16385 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0114 10:07:01.606665 16385 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0114 10:07:01.606675 16385 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0114 10:07:01.609437 16385 out.go:177] * OS release is Ubuntu 20.04.5 LTS
I0114 10:07:01.611020 16385 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3824/.minikube/addons for local assets ...
I0114 10:07:01.611081 16385 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-3824/.minikube/files for local assets ...
I0114 10:07:01.611103 16385 start.go:303] post-start completed in 18.146264ms
I0114 10:07:01.611655 16385 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/config.json ...
I0114 10:07:01.611770 16385 start.go:128] duration metric: createHost completed in 25.583846ms
I0114 10:07:01.611782 16385 start.go:83] releasing machines lock for "minikube", held for 25.674753ms
I0114 10:07:01.612066 16385 exec_runner.go:51] Run: cat /version.json
I0114 10:07:01.612227 16385 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0114 10:07:01.613044 16385 start.go:377] Unable to open version.json: cat /version.json: exit status 1
stdout:
stderr:
cat: /version.json: No such file or directory
I0114 10:07:01.613161 16385 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0114 10:07:01.634572 16385 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0114 10:07:01.848644 16385 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0114 10:07:02.060449 16385 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0114 10:07:02.268186 16385 exec_runner.go:51] Run: sudo systemctl restart docker
I0114 10:07:02.492085 16385 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0114 10:07:02.704277 16385 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0114 10:07:02.911788 16385 exec_runner.go:51] Run: sudo systemctl start cri-docker.socket
I0114 10:07:02.927823 16385 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0114 10:07:02.927902 16385 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0114 10:07:02.929240 16385 start.go:472] Will wait 60s for crictl version
I0114 10:07:02.929279 16385 exec_runner.go:51] Run: which crictl
I0114 10:07:02.930223 16385 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0114 10:07:02.952849 16385 start.go:488] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.22
RuntimeApiVersion: 1.41.0
I0114 10:07:02.952908 16385 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0114 10:07:02.979235 16385 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0114 10:07:03.009113 16385 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.22 ...
I0114 10:07:03.009188 16385 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0114 10:07:03.012256 16385 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0114 10:07:03.013707 16385 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 10:07:03.013753 16385 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0114 10:07:03.117973 16385 cni.go:95] Creating CNI manager for ""
I0114 10:07:03.117996 16385 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
I0114 10:07:03.118013 16385 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0114 10:07:03.118034 16385 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.132.0.4 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.132.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.132.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0114 10:07:03.118222 16385 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.132.0.4
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "ubuntu-20-agent"
kubeletExtraArgs:
node-ip: 10.132.0.4
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.132.0.4"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0114 10:07:03.118333 16385 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=ubuntu-20-agent --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.132.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0114 10:07:03.118411 16385 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I0114 10:07:03.128842 16385 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.25.3: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.25.3': No such file or directory
Initiating transfer...
I0114 10:07:03.128888 16385 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.25.3
I0114 10:07:03.146125 16385 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubeadm.sha256
I0114 10:07:03.146130 16385 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl.sha256
I0114 10:07:03.146182 16385 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubelet.sha256
I0114 10:07:03.146195 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/cache/linux/amd64/v1.25.3/kubeadm --> /var/lib/minikube/binaries/v1.25.3/kubeadm (43802624 bytes)
I0114 10:07:03.146213 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/cache/linux/amd64/v1.25.3/kubectl --> /var/lib/minikube/binaries/v1.25.3/kubectl (45015040 bytes)
I0114 10:07:03.146224 16385 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0114 10:07:03.159981 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/cache/linux/amd64/v1.25.3/kubelet --> /var/lib/minikube/binaries/v1.25.3/kubelet (114237464 bytes)
I0114 10:07:03.188858 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1593464049 /var/lib/minikube/binaries/v1.25.3/kubeadm
I0114 10:07:03.195394 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1408995124 /var/lib/minikube/binaries/v1.25.3/kubectl
I0114 10:07:03.260681 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3520884364 /var/lib/minikube/binaries/v1.25.3/kubelet
I0114 10:07:03.349875 16385 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0114 10:07:03.359389 16385 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0114 10:07:03.359409 16385 exec_runner.go:207] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0114 10:07:03.359474 16385 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (522 bytes)
I0114 10:07:03.359620 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2531836869 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0114 10:07:03.369553 16385 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0114 10:07:03.369580 16385 exec_runner.go:207] rm: /lib/systemd/system/kubelet.service
I0114 10:07:03.369644 16385 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0114 10:07:03.369811 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2705885136 /lib/systemd/system/kubelet.service
I0114 10:07:03.379894 16385 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2032 bytes)
I0114 10:07:03.380029 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2971766811 /var/tmp/minikube/kubeadm.yaml.new
I0114 10:07:03.389480 16385 exec_runner.go:51] Run: grep 10.132.0.4 control-plane.minikube.internal$ /etc/hosts
I0114 10:07:03.390812 16385 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube for IP: 10.132.0.4
I0114 10:07:03.390918 16385 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-3824/.minikube/ca.key
I0114 10:07:03.390965 16385 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-3824/.minikube/proxy-client-ca.key
I0114 10:07:03.391020 16385 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.key
I0114 10:07:03.391034 16385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.crt with IP's: []
I0114 10:07:03.536710 16385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.crt ...
I0114 10:07:03.536748 16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.crt: {Name:mk6343e22ba0ffe4e9d25050ad02a97f1f8618c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:03.536932 16385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.key ...
I0114 10:07:03.536946 16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/client.key: {Name:mk63fb1e63b2117f858e0e7164ffdf4dba02353f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:03.537026 16385 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key.13ebe801
I0114 10:07:03.537040 16385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt.13ebe801 with IP's: [10.132.0.4 10.96.0.1 127.0.0.1 10.0.0.1]
I0114 10:07:03.839950 16385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt.13ebe801 ...
I0114 10:07:03.839985 16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt.13ebe801: {Name:mke4088f7ffeb92284f3881ec7b5a89c34fa52c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:03.840158 16385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key.13ebe801 ...
I0114 10:07:03.840170 16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key.13ebe801: {Name:mkcec8d9064a9a0a0afd294395a4653a24f4fb5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:03.840239 16385 certs.go:320] copying /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt.13ebe801 -> /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt
I0114 10:07:03.840319 16385 certs.go:324] copying /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key.13ebe801 -> /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key
I0114 10:07:03.840367 16385 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.key
I0114 10:07:03.840381 16385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0114 10:07:04.000794 16385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.crt ...
I0114 10:07:04.000827 16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.crt: {Name:mkfdc9a5c41c36e17134ad349c5138c80e1983c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:04.001010 16385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.key ...
I0114 10:07:04.001022 16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.key: {Name:mka8059030bef3a27e7eecff0328f9d74e3cab05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:04.001183 16385 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3824/.minikube/certs/home/jenkins/minikube-integration/15642-3824/.minikube/certs/ca-key.pem (1679 bytes)
I0114 10:07:04.001219 16385 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3824/.minikube/certs/home/jenkins/minikube-integration/15642-3824/.minikube/certs/ca.pem (1070 bytes)
I0114 10:07:04.001238 16385 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3824/.minikube/certs/home/jenkins/minikube-integration/15642-3824/.minikube/certs/cert.pem (1115 bytes)
I0114 10:07:04.001256 16385 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-3824/.minikube/certs/home/jenkins/minikube-integration/15642-3824/.minikube/certs/key.pem (1679 bytes)
I0114 10:07:04.001926 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0114 10:07:04.002050 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1033223229 /var/lib/minikube/certs/apiserver.crt
I0114 10:07:04.012848 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0114 10:07:04.012975 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube66064136 /var/lib/minikube/certs/apiserver.key
I0114 10:07:04.022814 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0114 10:07:04.022933 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3799989977 /var/lib/minikube/certs/proxy-client.crt
I0114 10:07:04.033853 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0114 10:07:04.033979 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3117975674 /var/lib/minikube/certs/proxy-client.key
I0114 10:07:04.045784 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0114 10:07:04.046005 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube668977885 /var/lib/minikube/certs/ca.crt
I0114 10:07:04.055947 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0114 10:07:04.056103 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2493247125 /var/lib/minikube/certs/ca.key
I0114 10:07:04.064867 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0114 10:07:04.065031 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube795335000 /var/lib/minikube/certs/proxy-client-ca.crt
I0114 10:07:04.075007 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0114 10:07:04.075134 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube295349317 /var/lib/minikube/certs/proxy-client-ca.key
I0114 10:07:04.085778 16385 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0114 10:07:04.085804 16385 exec_runner.go:207] rm: /usr/share/ca-certificates/minikubeCA.pem
I0114 10:07:04.085853 16385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-3824/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0114 10:07:04.085978 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube494919457 /usr/share/ca-certificates/minikubeCA.pem
I0114 10:07:04.095086 16385 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0114 10:07:04.095191 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1527100247 /var/lib/minikube/kubeconfig
I0114 10:07:04.105200 16385 exec_runner.go:51] Run: openssl version
I0114 10:07:04.108182 16385 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0114 10:07:04.118122 16385 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0114 10:07:04.119389 16385 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
I0114 10:07:04.119424 16385 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0114 10:07:04.122297 16385 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0114 10:07:04.131897 16385 kubeadm.go:396] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:10.132.0.4 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:07:04.132039 16385 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0114 10:07:04.153378 16385 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0114 10:07:04.163613 16385 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0114 10:07:04.174891 16385 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0114 10:07:04.202281 16385 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0114 10:07:04.212099 16385 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0114 10:07:04.212138 16385 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0114 10:07:04.250668 16385 kubeadm.go:317] W0114 10:07:04.250530 16883 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0114 10:07:04.254691 16385 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I0114 10:07:04.254720 16385 kubeadm.go:317] [preflight] Running pre-flight checks
I0114 10:07:04.367648 16385 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0114 10:07:04.367689 16385 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0114 10:07:04.367695 16385 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0114 10:07:04.367699 16385 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0114 10:07:04.419145 16385 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0114 10:07:04.422609 16385 out.go:204] - Generating certificates and keys ...
I0114 10:07:04.422661 16385 kubeadm.go:317] [certs] Using existing ca certificate authority
I0114 10:07:04.422679 16385 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0114 10:07:04.467512 16385 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I0114 10:07:04.815164 16385 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I0114 10:07:05.021711 16385 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I0114 10:07:05.297265 16385 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I0114 10:07:05.338077 16385 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I0114 10:07:05.338176 16385 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
I0114 10:07:05.584635 16385 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I0114 10:07:05.584664 16385 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent] and IPs [10.132.0.4 127.0.0.1 ::1]
I0114 10:07:05.642276 16385 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I0114 10:07:06.099672 16385 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I0114 10:07:06.173990 16385 kubeadm.go:317] [certs] Generating "sa" key and public key
I0114 10:07:06.174098 16385 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0114 10:07:06.218376 16385 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0114 10:07:06.371237 16385 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0114 10:07:06.531816 16385 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0114 10:07:06.655459 16385 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0114 10:07:06.676685 16385 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0114 10:07:06.678623 16385 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0114 10:07:06.678649 16385 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0114 10:07:06.896971 16385 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0114 10:07:06.899506 16385 out.go:204] - Booting up control plane ...
I0114 10:07:06.899539 16385 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0114 10:07:06.899894 16385 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0114 10:07:06.901234 16385 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0114 10:07:06.902223 16385 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0114 10:07:06.904356 16385 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0114 10:07:12.907069 16385 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002628 seconds
I0114 10:07:12.907099 16385 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0114 10:07:12.915833 16385 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0114 10:07:13.431157 16385 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
I0114 10:07:13.431182 16385 kubeadm.go:317] [mark-control-plane] Marking the node ubuntu-20-agent as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0114 10:07:13.938395 16385 kubeadm.go:317] [bootstrap-token] Using token: vc8k2a.fqd42dsl4zvke0q4
I0114 10:07:13.940967 16385 out.go:204] - Configuring RBAC rules ...
I0114 10:07:13.941013 16385 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0114 10:07:13.943870 16385 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0114 10:07:13.950643 16385 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0114 10:07:13.952800 16385 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0114 10:07:13.954860 16385 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0114 10:07:13.956818 16385 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0114 10:07:13.964052 16385 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0114 10:07:14.279890 16385 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
I0114 10:07:14.347604 16385 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
I0114 10:07:14.348886 16385 kubeadm.go:317]
I0114 10:07:14.348907 16385 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
I0114 10:07:14.348912 16385 kubeadm.go:317]
I0114 10:07:14.348916 16385 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
I0114 10:07:14.348921 16385 kubeadm.go:317]
I0114 10:07:14.348925 16385 kubeadm.go:317] mkdir -p $HOME/.kube
I0114 10:07:14.348929 16385 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0114 10:07:14.348934 16385 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0114 10:07:14.348937 16385 kubeadm.go:317]
I0114 10:07:14.348942 16385 kubeadm.go:317] Alternatively, if you are the root user, you can run:
I0114 10:07:14.348945 16385 kubeadm.go:317]
I0114 10:07:14.348950 16385 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf
I0114 10:07:14.348953 16385 kubeadm.go:317]
I0114 10:07:14.348957 16385 kubeadm.go:317] You should now deploy a pod network to the cluster.
I0114 10:07:14.348961 16385 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0114 10:07:14.348973 16385 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0114 10:07:14.348977 16385 kubeadm.go:317]
I0114 10:07:14.348982 16385 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
I0114 10:07:14.348986 16385 kubeadm.go:317] and service account keys on each node and then running the following as root:
I0114 10:07:14.348990 16385 kubeadm.go:317]
I0114 10:07:14.348994 16385 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token vc8k2a.fqd42dsl4zvke0q4 \
I0114 10:07:14.348998 16385 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:ca5ae222565f0d80a07693c7c3b76e0f810307ec7292c767edf50f1957ddca19 \
I0114 10:07:14.349002 16385 kubeadm.go:317] --control-plane
I0114 10:07:14.349006 16385 kubeadm.go:317]
I0114 10:07:14.349009 16385 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
I0114 10:07:14.349013 16385 kubeadm.go:317]
I0114 10:07:14.349017 16385 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token vc8k2a.fqd42dsl4zvke0q4 \
I0114 10:07:14.349021 16385 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:ca5ae222565f0d80a07693c7c3b76e0f810307ec7292c767edf50f1957ddca19
I0114 10:07:14.352315 16385 cni.go:95] Creating CNI manager for ""
I0114 10:07:14.352345 16385 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
I0114 10:07:14.352386 16385 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0114 10:07:14.352465 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:14.352483 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_01_14T10_07_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:14.365726 16385 ops.go:34] apiserver oom_adj: -16
I0114 10:07:14.453656 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:15.044388 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:15.543943 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:16.043977 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:16.544007 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:17.043878 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:17.543808 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:18.043918 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:18.544168 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:19.044024 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:19.543870 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:20.044055 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:20.544746 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:21.044721 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:21.544105 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:22.044645 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:22.544632 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:23.044146 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:23.544366 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:24.044111 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:24.544695 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:25.044047 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:25.544377 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:26.044053 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:26.544325 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:27.044612 16385 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0114 10:07:27.112549 16385 kubeadm.go:1067] duration metric: took 12.760146034s to wait for elevateKubeSystemPrivileges.
I0114 10:07:27.112581 16385 kubeadm.go:398] StartCluster complete in 22.980693946s
I0114 10:07:27.112601 16385 settings.go:142] acquiring lock: {Name:mk762d90acf41588a398ec2dea6bc8cf96f87602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:27.112692 16385 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15642-3824/kubeconfig
I0114 10:07:27.113371 16385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-3824/kubeconfig: {Name:mk2c87b79f2a73c5564b0710ce5c3222bf694f79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 10:07:27.628066 16385 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0114 10:07:27.630870 16385 out.go:177] * Configuring local host environment ...
I0114 10:07:27.628131 16385 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0114 10:07:27.628145 16385 addons.go:486] enableAddons start: toEnable=map[], additional=[registry metrics-server volumesnapshots csi-hostpath-driver gcp-auth cloud-spanner helm-tiller]
I0114 10:07:27.628364 16385 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.25.3
W0114 10:07:27.632611 16385 out.go:239] *
W0114 10:07:27.632636 16385 out.go:239] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0114 10:07:27.632646 16385 out.go:239] * Most users should use the newer 'docker' driver instead, which does not require root!
W0114 10:07:27.632656 16385 out.go:239] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0114 10:07:27.632664 16385 out.go:239] *
W0114 10:07:27.632814 16385 out.go:239] ! kubectl and minikube configuration will be stored in /home/jenkins
W0114 10:07:27.632832 16385 out.go:239] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0114 10:07:27.632842 16385 out.go:239] *
I0114 10:07:27.632868 16385 addons.go:65] Setting volumesnapshots=true in profile "minikube"
I0114 10:07:27.632895 16385 addons.go:227] Setting addon volumesnapshots=true in "minikube"
W0114 10:07:27.632908 16385 out.go:239] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0114 10:07:27.632922 16385 out.go:239] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0114 10:07:27.632932 16385 out.go:239] *
W0114 10:07:27.632939 16385 out.go:239] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0114 10:07:27.632950 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.632967 16385 start.go:212] Will wait 6m0s for node &{Name:m01 IP:10.132.0.4 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 10:07:27.634987 16385 out.go:177] * Verifying Kubernetes components...
I0114 10:07:27.633337 16385 addons.go:65] Setting gcp-auth=true in profile "minikube"
I0114 10:07:27.633343 16385 addons.go:65] Setting cloud-spanner=true in profile "minikube"
I0114 10:07:27.633340 16385 addons.go:65] Setting metrics-server=true in profile "minikube"
I0114 10:07:27.633347 16385 addons.go:65] Setting csi-hostpath-driver=true in profile "minikube"
I0114 10:07:27.633352 16385 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0114 10:07:27.633354 16385 addons.go:65] Setting helm-tiller=true in profile "minikube"
I0114 10:07:27.633361 16385 addons.go:65] Setting registry=true in profile "minikube"
I0114 10:07:27.633369 16385 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0114 10:07:27.633709 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.637157 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.637201 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.637933 16385 addons.go:227] Setting addon csi-hostpath-driver=true in "minikube"
I0114 10:07:27.637975 16385 mustload.go:65] Loading cluster: minikube
I0114 10:07:27.638003 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.638083 16385 addons.go:227] Setting addon cloud-spanner=true in "minikube"
I0114 10:07:27.638130 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.638208 16385 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:07:27.638251 16385 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0114 10:07:27.638406 16385 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0114 10:07:27.638688 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.638709 16385 addons.go:227] Setting addon registry=true in "minikube"
I0114 10:07:27.638710 16385 addons.go:227] Setting addon metrics-server=true in "minikube"
I0114 10:07:27.638724 16385 addons.go:227] Setting addon storage-provisioner=true in "minikube"
W0114 10:07:27.638732 16385 addons.go:236] addon storage-provisioner should already be in state true
I0114 10:07:27.638742 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.638749 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.638755 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.638769 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.638789 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.638826 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.638926 16385 addons.go:227] Setting addon helm-tiller=true in "minikube"
I0114 10:07:27.638977 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.639316 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.639329 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.639335 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.639338 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.639349 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.638712 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.639374 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.639391 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.639402 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.639434 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.639453 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.639362 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.638689 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.639495 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.639363 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.639512 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.638688 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.639582 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.639604 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.639479 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.656762 16385 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent" to be "Ready" ...
I0114 10:07:27.660679 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.660727 16385 node_ready.go:49] node "ubuntu-20-agent" has status "Ready":"True"
I0114 10:07:27.660741 16385 node_ready.go:38] duration metric: took 3.939345ms waiting for node "ubuntu-20-agent" to be "Ready" ...
I0114 10:07:27.660751 16385 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 10:07:27.661027 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.662594 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.663405 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.675779 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.676017 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.676277 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.683250 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.683320 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.683481 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.683515 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.685064 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.685108 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.686804 16385 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-hzdcg" in "kube-system" namespace to be "Ready" ...
I0114 10:07:27.697784 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.697857 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.699442 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.699503 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.714758 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.715265 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.721767 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.721801 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.723843 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.723941 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.726834 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.726891 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.727240 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.727390 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.727550 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.727567 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.729902 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.729924 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.734855 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.736233 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
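The repeated pgrep / cgroup / healthz lines around here are minikube's apiserver readiness probe: find the kube-apiserver pid, read its freezer cgroup to confirm the process is not frozen (`THAWED`), then hit `/healthz`. A minimal sketch of the cgroup-path parsing step, assuming cgroup v1 layout as in this log; the pod and container hashes below are illustrative placeholders, not values from a real node:

```shell
# Parse an "apiserver freezer" line like the ones api_server.go logs above.
# The pod/container IDs are made up for illustration.
line='3:freezer:/kubepods/burstable/pod1234/abcd'

# Everything after ":freezer:" is the container's cgroup path.
cgroup_path="${line#*:freezer:}"
state_file="/sys/fs/cgroup/freezer${cgroup_path}/freezer.state"
echo "$state_file"

# On the node itself, the probe would then be roughly:
#   [ "$(sudo cat "$state_file")" = "THAWED" ] && \
#     curl -ks https://10.132.0.4:8443/healthz    # body "ok" on success
```

The `${line#*:freezer:}` expansion strips the shortest prefix ending in `:freezer:`, which is why the hierarchy number (`3:`) never reaches the path.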
I0114 10:07:27.740734 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
I0114 10:07:27.738541 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.738872 16385 addons.go:227] Setting addon default-storageclass=true in "minikube"
I0114 10:07:27.738914 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.747833 16385 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.2
I0114 10:07:27.744162 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.744228 16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
W0114 10:07:27.744241 16385 addons.go:236] addon default-storageclass should already be in state true
I0114 10:07:27.751321 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.751391 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.751816 16385 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0114 10:07:27.751845 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0114 10:07:27.751981 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2358514389 /etc/kubernetes/addons/metrics-apiservice.yaml
I0114 10:07:27.752145 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0114 10:07:27.752246 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2294294542 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0114 10:07:27.754170 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:27.754851 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:27.754869 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:27.754902 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:27.756882 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.756913 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.757082 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.757125 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.759892 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.762890 16385 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0114 10:07:27.765440 16385 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0114 10:07:27.765470 16385 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0114 10:07:27.765485 16385 exec_runner.go:207] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0114 10:07:27.765654 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0114 10:07:27.765789 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2149253070 /etc/kubernetes/addons/storage-provisioner.yaml
I0114 10:07:27.762644 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.766377 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.763517 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.763658 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.766630 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.772427 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
I0114 10:07:27.773146 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.775029 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.773331 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.779936 16385 out.go:177] - Using image docker.io/registry:2.8.1
I0114 10:07:27.777995 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
I0114 10:07:27.779845 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.783485 16385 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0114 10:07:27.785409 16385 out.go:177] - Using image gcr.io/google_containers/kube-registry-proxy:0.4
I0114 10:07:27.787570 16385 addons.go:419] installing /etc/kubernetes/addons/registry-rc.yaml
I0114 10:07:27.787607 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
I0114 10:07:27.787704 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3017411789 /etc/kubernetes/addons/registry-rc.yaml
I0114 10:07:27.785614 16385 addons.go:419] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0114 10:07:27.787865 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0114 10:07:27.790509 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
I0114 10:07:27.787981 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube520640777 /etc/kubernetes/addons/helm-tiller-dp.yaml
I0114 10:07:27.788171 16385 addons.go:419] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0114 10:07:27.792298 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0114 10:07:27.792455 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3813655867 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0114 10:07:27.795954 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
I0114 10:07:27.793408 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.794432 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0114 10:07:27.796387 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:27.798139 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.800349 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
I0114 10:07:27.805280 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
I0114 10:07:27.809465 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
I0114 10:07:27.807997 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.814637 16385 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.4.8
I0114 10:07:27.817513 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
I0114 10:07:27.819892 16385 out.go:177] - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
I0114 10:07:27.817479 16385 addons.go:419] installing /etc/kubernetes/addons/deployment.yaml
I0114 10:07:27.818550 16385 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0114 10:07:27.819054 16385 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0114 10:07:27.821365 16385 addons.go:419] installing /etc/kubernetes/addons/registry-svc.yaml
I0114 10:07:27.822304 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0114 10:07:27.822417 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
I0114 10:07:27.822431 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3257268722 /etc/kubernetes/addons/registry-svc.yaml
I0114 10:07:27.822445 16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0114 10:07:27.822464 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0114 10:07:27.822545 16385 addons.go:419] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0114 10:07:27.822549 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3716108104 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0114 10:07:27.822553 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4257305772 /etc/kubernetes/addons/deployment.yaml
I0114 10:07:27.822564 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0114 10:07:27.822577 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0114 10:07:27.822648 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2050701391 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0114 10:07:27.822684 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2295067184 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0114 10:07:27.824911 16385 addons.go:419] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0114 10:07:27.824938 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0114 10:07:27.825035 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube46951008 /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0114 10:07:27.833617 16385 addons.go:419] installing /etc/kubernetes/addons/registry-proxy.yaml
I0114 10:07:27.833733 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
I0114 10:07:27.833906 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1716086082 /etc/kubernetes/addons/registry-proxy.yaml
I0114 10:07:27.834087 16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
I0114 10:07:27.834120 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
I0114 10:07:27.834281 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3106661543 /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
I0114 10:07:27.836684 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:27.836767 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:27.837002 16385 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0114 10:07:27.837029 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0114 10:07:27.837128 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4055652415 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0114 10:07:27.837321 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0114 10:07:27.842093 16385 addons.go:419] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0114 10:07:27.842118 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0114 10:07:27.842220 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4233165337 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0114 10:07:27.848175 16385 addons.go:419] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0114 10:07:27.848210 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0114 10:07:27.848328 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2093681031 /etc/kubernetes/addons/helm-tiller-svc.yaml
I0114 10:07:27.853664 16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0114 10:07:27.853701 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
I0114 10:07:27.853819 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2763555223 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0114 10:07:27.855496 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:27.855526 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:27.857493 16385 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0114 10:07:27.857526 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0114 10:07:27.857665 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube364474976 /etc/kubernetes/addons/metrics-server-service.yaml
I0114 10:07:27.859641 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0114 10:07:27.863907 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:27.864005 16385 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0114 10:07:27.864021 16385 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0114 10:07:27.864028 16385 exec_runner.go:207] rm: /etc/kubernetes/addons/storageclass.yaml
I0114 10:07:27.864091 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0114 10:07:27.864202 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4091563180 /etc/kubernetes/addons/storageclass.yaml
I0114 10:07:27.864434 16385 addons.go:419] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0114 10:07:27.864454 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
I0114 10:07:27.864553 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3624240237 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0114 10:07:27.866694 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0114 10:07:27.883173 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0114 10:07:27.885442 16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0114 10:07:27.885479 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
I0114 10:07:27.886105 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1517598696 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0114 10:07:27.894678 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0114 10:07:27.903981 16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0114 10:07:27.904020 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
I0114 10:07:27.904147 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1357345172 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0114 10:07:27.922436 16385 addons.go:419] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0114 10:07:27.922471 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
I0114 10:07:27.922599 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1658521713 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0114 10:07:27.928404 16385 addons.go:419] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0114 10:07:27.928437 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
I0114 10:07:27.928531 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1677821458 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0114 10:07:27.970777 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0114 10:07:27.973303 16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0114 10:07:27.973339 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
I0114 10:07:27.973459 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2193996750 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0114 10:07:28.029076 16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0114 10:07:28.029113 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
I0114 10:07:28.029240 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3161982828 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0114 10:07:28.066112 16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0114 10:07:28.066142 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
I0114 10:07:28.066259 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3166253901 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0114 10:07:28.105698 16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
I0114 10:07:28.105744 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
I0114 10:07:28.105885 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2629367358 /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
I0114 10:07:28.131811 16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0114 10:07:28.131857 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
I0114 10:07:28.132004 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3447136068 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0114 10:07:28.153733 16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
I0114 10:07:28.153812 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
I0114 10:07:28.153943 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2767193131 /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
I0114 10:07:28.175265 16385 addons.go:419] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0114 10:07:28.175302 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0114 10:07:28.175420 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2955231695 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0114 10:07:28.188823 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0114 10:07:28.741108 16385 start.go:833] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS
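The host-record injection confirmed here is the `sed` pipeline logged at 10:07:27.819054: it inserts a `hosts{}` block before CoreDNS's `forward` plugin line and replaces the ConfigMap. The edit can be exercised against a throwaway local Corefile; this sketch assumes GNU sed (for `\n` inside `i\` text) and a simplified Corefile, so indentation differs from a real cluster's ConfigMap:

```shell
# Recreate the CoreDNS Corefile edit from the log on a local copy.
cat > /tmp/Corefile <<'EOF'
.:53 {
    errors
    forward . /etc/resolv.conf
    cache 30
}
EOF

# GNU sed "i\" inserts the hosts{} block before the matching forward line;
# embedded "\n" in the inserted text becomes real newlines.
patched=$(sed '/forward \. \/etc\/resolv\.conf/i\    hosts {\n       127.0.0.1 host.minikube.internal\n       fallthrough\n    }' /tmp/Corefile)
echo "$patched"
```

Because `hosts` appears before `forward` in the patched file, queries for `host.minikube.internal` are answered locally and everything else still falls through to the upstream resolver.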
I0114 10:07:28.870354 16385 addons.go:457] Verifying addon metrics-server=true in "minikube"
I0114 10:07:28.914573 16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.054884841s)
I0114 10:07:28.914607 16385 addons.go:457] Verifying addon registry=true in "minikube"
I0114 10:07:28.917026 16385 out.go:177] * Verifying registry addon...
I0114 10:07:28.920003 16385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0114 10:07:28.924387 16385 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
I0114 10:07:28.924415 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:28.935065 16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.068328174s)
I0114 10:07:28.990728 16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.01989352s)
W0114 10:07:28.990771 16385 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0114 10:07:28.990791 16385 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0114 10:07:29.241006 16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.05211434s)
I0114 10:07:29.241043 16385 addons.go:457] Verifying addon csi-hostpath-driver=true in "minikube"
I0114 10:07:29.243728 16385 out.go:177] * Verifying csi-hostpath-driver addon...
I0114 10:07:29.246723 16385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0114 10:07:29.250477 16385 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0114 10:07:29.250496 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:29.267706 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0114 10:07:29.428637 16385 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0114 10:07:29.428657 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:29.698824 16385 pod_ready.go:102] pod "coredns-565d847f94-hzdcg" in "kube-system" namespace has status "Ready":"False"
I0114 10:07:29.756664 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:29.929597 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:30.255526 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:30.432860 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:30.757693 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:30.929971 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:31.261020 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:31.429887 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:31.700385 16385 pod_ready.go:102] pod "coredns-565d847f94-hzdcg" in "kube-system" namespace has status "Ready":"False"
I0114 10:07:31.756665 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:31.930089 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:32.070822 16385 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.802992171s)
I0114 10:07:32.256483 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:32.430438 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:32.698915 16385 pod_ready.go:92] pod "coredns-565d847f94-hzdcg" in "kube-system" namespace has status "Ready":"True"
I0114 10:07:32.698939 16385 pod_ready.go:81] duration metric: took 5.012115453s waiting for pod "coredns-565d847f94-hzdcg" in "kube-system" namespace to be "Ready" ...
I0114 10:07:32.698956 16385 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-j4qdt" in "kube-system" namespace to be "Ready" ...
I0114 10:07:32.756361 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:32.929234 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:33.255831 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:33.429210 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:33.757099 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:33.929254 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:34.209297 16385 pod_ready.go:92] pod "coredns-565d847f94-j4qdt" in "kube-system" namespace has status "Ready":"True"
I0114 10:07:34.209329 16385 pod_ready.go:81] duration metric: took 1.51036685s waiting for pod "coredns-565d847f94-j4qdt" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.209343 16385 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.214090 16385 pod_ready.go:92] pod "etcd-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
I0114 10:07:34.214114 16385 pod_ready.go:81] duration metric: took 4.763853ms waiting for pod "etcd-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.214127 16385 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.219017 16385 pod_ready.go:92] pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
I0114 10:07:34.219040 16385 pod_ready.go:81] duration metric: took 4.905219ms waiting for pod "kube-apiserver-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.219052 16385 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.223730 16385 pod_ready.go:92] pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
I0114 10:07:34.223752 16385 pod_ready.go:81] duration metric: took 4.692428ms waiting for pod "kube-controller-manager-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.223764 16385 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kg2xf" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.255805 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:34.296428 16385 pod_ready.go:92] pod "kube-proxy-kg2xf" in "kube-system" namespace has status "Ready":"True"
I0114 10:07:34.296457 16385 pod_ready.go:81] duration metric: took 72.684129ms waiting for pod "kube-proxy-kg2xf" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.296471 16385 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.360171 16385 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0114 10:07:34.360302 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2411787160 /var/lib/minikube/google_application_credentials.json
I0114 10:07:34.372980 16385 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0114 10:07:34.373117 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2944672869 /var/lib/minikube/google_cloud_project
I0114 10:07:34.386372 16385 addons.go:227] Setting addon gcp-auth=true in "minikube"
I0114 10:07:34.386487 16385 host.go:66] Checking if "minikube" exists ...
I0114 10:07:34.387039 16385 kubeconfig.go:92] found "minikube" server: "https://10.132.0.4:8443"
I0114 10:07:34.387057 16385 api_server.go:165] Checking apiserver status ...
I0114 10:07:34.387081 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:34.409026 16385 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/17460/cgroup
I0114 10:07:34.421060 16385 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b"
I0114 10:07:34.421114 16385 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd8bf235cc15ffdde47bc19857d584c74/2ff1699b4878381bbbffd51fc70f8184d64c76643835e528bc633a8f869f9e2b/freezer.state
I0114 10:07:34.429293 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:34.430803 16385 api_server.go:203] freezer state: "THAWED"
I0114 10:07:34.430830 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:34.435220 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:34.435273 16385 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0114 10:07:34.438478 16385 out.go:177] - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
I0114 10:07:34.440038 16385 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.13
I0114 10:07:34.441577 16385 addons.go:419] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0114 10:07:34.441611 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0114 10:07:34.441886 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube571050405 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0114 10:07:34.454552 16385 addons.go:419] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0114 10:07:34.454584 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0114 10:07:34.454672 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3303134801 /etc/kubernetes/addons/gcp-auth-service.yaml
I0114 10:07:34.465184 16385 addons.go:419] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0114 10:07:34.465217 16385 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5393 bytes)
I0114 10:07:34.465335 16385 exec_runner.go:51] Run: sudo cp -a /tmp/minikube990606094 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0114 10:07:34.477202 16385 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0114 10:07:34.696473 16385 pod_ready.go:92] pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace has status "Ready":"True"
I0114 10:07:34.696495 16385 pod_ready.go:81] duration metric: took 400.017786ms waiting for pod "kube-scheduler-ubuntu-20-agent" in "kube-system" namespace to be "Ready" ...
I0114 10:07:34.696504 16385 pod_ready.go:38] duration metric: took 7.035739929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 10:07:34.696527 16385 api_server.go:51] waiting for apiserver process to appear ...
I0114 10:07:34.696572 16385 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:07:34.734100 16385 api_server.go:71] duration metric: took 7.101094027s to wait for apiserver process to appear ...
I0114 10:07:34.734128 16385 api_server.go:87] waiting for apiserver healthz status ...
I0114 10:07:34.734142 16385 api_server.go:252] Checking apiserver healthz at https://10.132.0.4:8443/healthz ...
I0114 10:07:34.738480 16385 api_server.go:278] https://10.132.0.4:8443/healthz returned 200:
ok
I0114 10:07:34.739240 16385 api_server.go:140] control plane version: v1.25.3
I0114 10:07:34.739265 16385 api_server.go:130] duration metric: took 5.131151ms to wait for apiserver health ...
I0114 10:07:34.739275 16385 system_pods.go:43] waiting for kube-system pods to appear ...
I0114 10:07:34.755735 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:34.902599 16385 system_pods.go:59] 19 kube-system pods found
I0114 10:07:34.902639 16385 system_pods.go:61] "coredns-565d847f94-hzdcg" [e29d84f5-82a6-47c0-b832-601c7c0781a9] Running
I0114 10:07:34.902647 16385 system_pods.go:61] "coredns-565d847f94-j4qdt" [235e6f77-d3f1-4391-ad58-df166f26d492] Running
I0114 10:07:34.902654 16385 system_pods.go:61] "csi-hostpath-attacher-0" [5bd68643-6771-472d-99e4-4015cd983d36] Pending
I0114 10:07:34.902662 16385 system_pods.go:61] "csi-hostpath-provisioner-0" [73cb9a92-49ec-4a1a-884d-a2cbb3f3542d] Pending
I0114 10:07:34.902669 16385 system_pods.go:61] "csi-hostpath-resizer-0" [2c1884b8-314c-4b4c-a2dc-1d9186cf0792] Pending
I0114 10:07:34.902676 16385 system_pods.go:61] "csi-hostpath-snapshotter-0" [48f2bfa7-7661-45de-ac6f-19f41e393d0d] Pending
I0114 10:07:34.902683 16385 system_pods.go:61] "csi-hostpathplugin-0" [7a9ea40d-11af-40dc-800a-213c03c35ebc] Pending
I0114 10:07:34.902695 16385 system_pods.go:61] "etcd-ubuntu-20-agent" [a1b7d9bb-31d0-441d-8f46-0aa17e6541f1] Running
I0114 10:07:34.902707 16385 system_pods.go:61] "kube-apiserver-ubuntu-20-agent" [36ae1b47-772b-425a-b340-0a9b32861e7d] Running
I0114 10:07:34.902721 16385 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent" [df519fda-9fe2-47c1-83cf-17df66f0fb3e] Running
I0114 10:07:34.902728 16385 system_pods.go:61] "kube-proxy-kg2xf" [26fe60cf-f9db-4fdd-af89-776e4ede4748] Running
I0114 10:07:34.902738 16385 system_pods.go:61] "kube-scheduler-ubuntu-20-agent" [93aa1003-d1c5-4b8b-826f-83be5d5d2f29] Running
I0114 10:07:34.902754 16385 system_pods.go:61] "metrics-server-56c6cfbdd9-tg5kv" [99b244b0-02bb-4d7b-8b98-f38c99f1949e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0114 10:07:34.902766 16385 system_pods.go:61] "registry-kq4cd" [32b72e54-cd00-412d-9956-c5373a71c06c] Pending
I0114 10:07:34.902776 16385 system_pods.go:61] "registry-proxy-s9fw7" [1ad6757d-2230-4f49-bb63-c55e4bf5d78b] Pending
I0114 10:07:34.902787 16385 system_pods.go:61] "snapshot-controller-67c8f9659-hb5bx" [6d8b5c82-bb84-4599-9b84-b8dc330fdb73] Pending
I0114 10:07:34.902795 16385 system_pods.go:61] "snapshot-controller-67c8f9659-lcxlj" [44d0f1e8-c929-4513-9907-e019af13d5bd] Pending
I0114 10:07:34.902809 16385 system_pods.go:61] "storage-provisioner" [f313a32c-e6d4-45f9-a444-4fc747ab9a81] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0114 10:07:34.902820 16385 system_pods.go:61] "tiller-deploy-696b5bfbb7-pg8sd" [930aa4f6-25af-4b84-9939-c484716e2fdf] Pending
I0114 10:07:34.902832 16385 system_pods.go:74] duration metric: took 163.54964ms to wait for pod list to return data ...
I0114 10:07:34.902845 16385 default_sa.go:34] waiting for default service account to be created ...
I0114 10:07:34.928353 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:35.096651 16385 default_sa.go:45] found service account: "default"
I0114 10:07:35.096674 16385 default_sa.go:55] duration metric: took 193.820503ms for default service account to be created ...
I0114 10:07:35.096682 16385 system_pods.go:116] waiting for k8s-apps to be running ...
I0114 10:07:35.213496 16385 addons.go:457] Verifying addon gcp-auth=true in "minikube"
I0114 10:07:35.216418 16385 out.go:177] * Verifying gcp-auth addon...
I0114 10:07:35.218699 16385 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0114 10:07:35.220953 16385 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0114 10:07:35.220971 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:35.255307 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:35.302337 16385 system_pods.go:86] 19 kube-system pods found
I0114 10:07:35.302372 16385 system_pods.go:89] "coredns-565d847f94-hzdcg" [e29d84f5-82a6-47c0-b832-601c7c0781a9] Running
I0114 10:07:35.302381 16385 system_pods.go:89] "coredns-565d847f94-j4qdt" [235e6f77-d3f1-4391-ad58-df166f26d492] Running
I0114 10:07:35.302388 16385 system_pods.go:89] "csi-hostpath-attacher-0" [5bd68643-6771-472d-99e4-4015cd983d36] Pending
I0114 10:07:35.302394 16385 system_pods.go:89] "csi-hostpath-provisioner-0" [73cb9a92-49ec-4a1a-884d-a2cbb3f3542d] Pending
I0114 10:07:35.302400 16385 system_pods.go:89] "csi-hostpath-resizer-0" [2c1884b8-314c-4b4c-a2dc-1d9186cf0792] Pending
I0114 10:07:35.302407 16385 system_pods.go:89] "csi-hostpath-snapshotter-0" [48f2bfa7-7661-45de-ac6f-19f41e393d0d] Pending
I0114 10:07:35.302416 16385 system_pods.go:89] "csi-hostpathplugin-0" [7a9ea40d-11af-40dc-800a-213c03c35ebc] Pending
I0114 10:07:35.302427 16385 system_pods.go:89] "etcd-ubuntu-20-agent" [a1b7d9bb-31d0-441d-8f46-0aa17e6541f1] Running
I0114 10:07:35.302438 16385 system_pods.go:89] "kube-apiserver-ubuntu-20-agent" [36ae1b47-772b-425a-b340-0a9b32861e7d] Running
I0114 10:07:35.302449 16385 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent" [df519fda-9fe2-47c1-83cf-17df66f0fb3e] Running
I0114 10:07:35.302463 16385 system_pods.go:89] "kube-proxy-kg2xf" [26fe60cf-f9db-4fdd-af89-776e4ede4748] Running
I0114 10:07:35.302476 16385 system_pods.go:89] "kube-scheduler-ubuntu-20-agent" [93aa1003-d1c5-4b8b-826f-83be5d5d2f29] Running
I0114 10:07:35.302486 16385 system_pods.go:89] "metrics-server-56c6cfbdd9-tg5kv" [99b244b0-02bb-4d7b-8b98-f38c99f1949e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0114 10:07:35.302492 16385 system_pods.go:89] "registry-kq4cd" [32b72e54-cd00-412d-9956-c5373a71c06c] Pending
I0114 10:07:35.302498 16385 system_pods.go:89] "registry-proxy-s9fw7" [1ad6757d-2230-4f49-bb63-c55e4bf5d78b] Pending
I0114 10:07:35.302502 16385 system_pods.go:89] "snapshot-controller-67c8f9659-hb5bx" [6d8b5c82-bb84-4599-9b84-b8dc330fdb73] Pending
I0114 10:07:35.302508 16385 system_pods.go:89] "snapshot-controller-67c8f9659-lcxlj" [44d0f1e8-c929-4513-9907-e019af13d5bd] Pending
I0114 10:07:35.302518 16385 system_pods.go:89] "storage-provisioner" [f313a32c-e6d4-45f9-a444-4fc747ab9a81] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0114 10:07:35.302532 16385 system_pods.go:89] "tiller-deploy-696b5bfbb7-pg8sd" [930aa4f6-25af-4b84-9939-c484716e2fdf] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0114 10:07:35.302546 16385 system_pods.go:126] duration metric: took 205.857605ms to wait for k8s-apps to be running ...
I0114 10:07:35.302559 16385 system_svc.go:44] waiting for kubelet service to be running ....
I0114 10:07:35.302605 16385 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0114 10:07:35.316994 16385 system_svc.go:56] duration metric: took 14.427221ms WaitForService to wait for kubelet.
I0114 10:07:35.317025 16385 kubeadm.go:573] duration metric: took 7.684025055s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0114 10:07:35.317046 16385 node_conditions.go:102] verifying NodePressure condition ...
I0114 10:07:35.429078 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:35.496431 16385 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0114 10:07:35.496456 16385 node_conditions.go:123] node cpu capacity is 8
I0114 10:07:35.496467 16385 node_conditions.go:105] duration metric: took 179.416679ms to run NodePressure ...
I0114 10:07:35.496477 16385 start.go:217] waiting for startup goroutines ...
I0114 10:07:35.724768 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:35.756726 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:35.929596 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:36.224285 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:36.256459 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:36.430727 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:36.724426 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:36.756363 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:36.928511 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:37.224237 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:37.256772 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:37.429358 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:37.724106 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:37.756451 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:37.929109 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:38.224112 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:38.256477 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:38.429415 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:38.724147 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:38.756568 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:38.930164 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:39.224480 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:39.256145 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:39.429875 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:39.725045 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:39.755866 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:39.929029 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:40.225162 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:40.256740 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:40.429800 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:40.724644 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:40.757804 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:40.929608 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:41.224266 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:41.257073 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:41.429204 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:41.724805 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:41.756623 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:41.929379 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:42.225520 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:42.257213 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:42.428752 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:42.724549 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:42.756587 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:42.929056 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:43.225074 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:43.255555 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:43.429191 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:43.725582 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:43.756443 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:43.929704 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:44.224148 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:44.255473 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:44.428426 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:44.724342 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:44.757147 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:44.929166 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0114 10:07:45.243826 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:45.256904 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:45.428901 16385 kapi.go:108] duration metric: took 16.508901147s to wait for kubernetes.io/minikube-addons=registry ...
I0114 10:07:45.724780 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:45.756106 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:46.224657 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:46.256949 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:46.724477 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:46.756412 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:47.224364 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:47.256353 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:47.724307 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:47.756867 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:48.224715 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:48.256985 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:48.723804 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:48.755449 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:49.225145 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:49.256290 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:49.725734 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:49.756537 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:50.224800 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:50.256423 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:50.724781 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:50.758484 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:51.224670 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:51.256456 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:51.724731 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:51.780745 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:52.224532 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:52.257468 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:52.724160 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:52.756568 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:53.224226 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:53.255999 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:53.724090 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:53.755531 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:54.224741 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:54.256401 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:54.724967 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:54.756876 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:55.224543 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:55.256042 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:55.724413 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0114 10:07:55.755981 16385 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0114 10:07:56.224062 16385 kapi.go:108] duration metric: took 21.005361557s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0114 10:07:56.226144 16385 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0114 10:07:56.227762 16385 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0114 10:07:56.229138 16385 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0114 10:07:56.255864 16385 kapi.go:108] duration metric: took 27.009137513s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0114 10:07:56.258198 16385 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, metrics-server, helm-tiller, volumesnapshots, registry, gcp-auth, csi-hostpath-driver
I0114 10:07:56.259730 16385 addons.go:488] enableAddons completed in 28.631581802s
I0114 10:07:56.260052 16385 exec_runner.go:51] Run: rm -f paused
I0114 10:07:56.306675 16385 start.go:536] kubectl: 1.26.0, cluster: 1.25.3 (minor skew: 1)
I0114 10:07:56.309012 16385 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Logs begin at Mon 2022-12-12 17:50:41 UTC, end at Sat 2023-01-14 10:15:18 UTC. --
Jan 14 10:07:36 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:36.256064573Z" level=warning msg="reference for unknown type: " digest="sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f" remote="ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f"
Jan 14 10:07:37 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:37.907191908Z" level=warning msg="reference for unknown type: " digest="sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4" remote="k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4"
Jan 14 10:07:38 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:38.985604668Z" level=warning msg="reference for unknown type: " digest="sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da" remote="gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da"
Jan 14 10:07:42 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:42.810434841Z" level=warning msg="reference for unknown type: " digest="sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2" remote="k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2"
Jan 14 10:07:44 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:44.100372898Z" level=warning msg="reference for unknown type: " digest="sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a" remote="k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a"
Jan 14 10:07:45 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:45.178009545Z" level=warning msg="reference for unknown type: " digest="sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02" remote="k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02"
Jan 14 10:07:46 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:46.274383462Z" level=warning msg="reference for unknown type: " digest="sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782" remote="k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782"
Jan 14 10:07:47 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:47.314940991Z" level=warning msg="reference for unknown type: " digest="sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09" remote="k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09"
Jan 14 10:07:48 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:48.405607389Z" level=warning msg="reference for unknown type: " digest="sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068" remote="k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068"
Jan 14 10:07:49 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:49.578029838Z" level=warning msg="reference for unknown type: " digest="sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16" remote="k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16"
Jan 14 10:07:49 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:49.712333969Z" level=info msg="ignoring event" container=c15ad6fc83da60d7d36b5955dd91389b972444f8eae14dc902b5c5ae44529eca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:07:49 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:49.731251418Z" level=info msg="ignoring event" container=ec724690f5f37870fc8571365a6c9a3c73b06368d273c24009905760d2b6f68b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:07:50 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:50.703200543Z" level=warning msg="reference for unknown type: " digest="sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108" remote="k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108"
Jan 14 10:07:50 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:50.854919320Z" level=info msg="ignoring event" container=60e7e632ec335e4dbcd63c5ba412e34e4564a68df9a1583180cbd638ab0704f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:07:51 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:51.670272266Z" level=warning msg="reference for unknown type: " digest="sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659" remote="k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659"
Jan 14 10:07:51 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:51.819871487Z" level=info msg="ignoring event" container=9404ef703f1dcf0fcf5bd0e16eb444d48c2350b6889a3b2fcab22362fc4aa399 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:07:52 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:52.623900374Z" level=warning msg="reference for unknown type: " digest="sha256:08a49cb7a588d81723b7e02c16082c75418b6e0a54cf2e44668bd77f79a41a40" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:08a49cb7a588d81723b7e02c16082c75418b6e0a54cf2e44668bd77f79a41a40"
Jan 14 10:07:52 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:52.847652357Z" level=info msg="ignoring event" container=f04866c0dcb3f1c7f8d5cdc9744a46e968c9c1b029a679ebcb90f76e1643abb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:07:54 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:07:54.918937924Z" level=warning msg="reference for unknown type: " digest="sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994" remote="k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994"
Jan 14 10:08:08 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:08:08.518949215Z" level=info msg="ignoring event" container=05a012ae497d60bf6678ca4b3ce8d19bf52d5a1369b6059c45d5036b8b15fc9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:08:10 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:08:10.210852515Z" level=info msg="ignoring event" container=aa2a19fc7fee5562e11db743ae10464bc9bfb3524cd29be285ae35d79d9fd61a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:09:56 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:09:56.236167749Z" level=info msg="ignoring event" container=7e5fd0e68c151c52ee70cce243d5fdfb2b9d8e8ed19a413b814615d1da93e0a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:09:56 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:09:56.294169536Z" level=info msg="ignoring event" container=9d9123abb65f144d7aec82a411e226f0004a5f07dc4ae821d10cb059d1f6c64c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:09:56 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:09:56.294212513Z" level=info msg="ignoring event" container=3ca1b6a5c960498f4a5451b809daed5f7178d1f94002230b2f37f02c63271f6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:09:56 ubuntu-20-agent dockerd[16499]: time="2023-01-14T10:09:56.356260464Z" level=info msg="ignoring event" container=10cdee3dbdce5a4c43d46ba103e0a1e70e3b65cfd77d0d1710513085970ec2d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
e0c7cbfe335c2 k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 7 minutes ago Running liveness-probe 0 54faa7460c909
01800d02aa2ea gcr.io/k8s-minikube/gcp-auth-webhook@sha256:08a49cb7a588d81723b7e02c16082c75418b6e0a54cf2e44668bd77f79a41a40 7 minutes ago Running gcp-auth 0 a36a66dac7f44
6e39e5cd9d78d k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659 7 minutes ago Running hostpath 0 54faa7460c909
3bdbccbf8a29e k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 7 minutes ago Running node-driver-registrar 0 54faa7460c909
39188bd328071 k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16 7 minutes ago Running csi-external-health-monitor-controller 0 54faa7460c909
458bdb236a928 k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 7 minutes ago Running csi-attacher 0 2dd0b15cfe812
88caf00d90a3d k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 7 minutes ago Running csi-snapshotter 0 64fff840b7280
e998541c1561f k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02 7 minutes ago Running csi-external-health-monitor-agent 0 54faa7460c909
a2763077d5802 k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a 7 minutes ago Running csi-resizer 0 cdcce0e103ed2
ccefe34ac25b0 k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 7 minutes ago Running volume-snapshot-controller 0 e4990db1461fa
f2482d6749a30 k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 7 minutes ago Running csi-provisioner 0 927beeae93c54
11fa881780fb0 k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 7 minutes ago Running volume-snapshot-controller 0 a2c2d6bba2256
ef11c381189ff ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f 7 minutes ago Running tiller 0 93efd01a37d26
b508bdd8c8273 gcr.io/cloud-spanner-emulator/emulator@sha256:5469945589399bd79ead8bed929f5eb4d1c5ee98d095df5b0ebe35f0b7160a84 7 minutes ago Running cloud-spanner-emulator 0 1928dbcf7a541
9be7dfbede6e8 registry.k8s.io/metrics-server/metrics-server@sha256:f977ad859fb500c1302d9c3428c6271db031bb7431e7076213b676b345a88dc2 7 minutes ago Exited metrics-server 0 d506ee694ccd4
7c5cbae47eb40 6e38f40d628db 7 minutes ago Running storage-provisioner 0 7b54ada7bbbfd
cc9f535a05271 5185b96f0becf 7 minutes ago Running coredns 0 01847f791815e
4f53bf8a83055 beaaf00edd38a 7 minutes ago Running kube-proxy 0 73299f5498088
ab6d08dc2097b a8a176a5d5d69 8 minutes ago Running etcd 30 99cf335a33bbb
2ff1699b48783 0346dbd74bcb9 8 minutes ago Running kube-apiserver 0 06e63285ce53c
604a7cca50ac7 6039992312758 8 minutes ago Running kube-controller-manager 35 fcfe8b131688f
9fb1ebe94b48a 6d23ec0e8b87e 8 minutes ago Running kube-scheduler 31 28a5d33e0cb63
*
* ==> coredns [cc9f535a0527] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration SHA512 = 7839f4272055c68eb3195e01fd465aa8d3e1d0906dde9d63a3a809e61980a8e84b23c29639a35e572df16c7c3dba67ccc987b8535eb396aa10f0126ebf95ca4d
[INFO] Reloading complete
*
* ==> describe nodes <==
* Name: ubuntu-20-agent
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent
kubernetes.io/os=linux
minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_14T10_07_14_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 14 Jan 2023 10:07:11 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent
AcquireTime: <unset>
RenewTime: Sat, 14 Jan 2023 10:15:14 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 14 Jan 2023 10:13:21 +0000 Sat, 14 Jan 2023 10:07:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 14 Jan 2023 10:13:21 +0000 Sat, 14 Jan 2023 10:07:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 14 Jan 2023 10:13:21 +0000 Sat, 14 Jan 2023 10:07:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 14 Jan 2023 10:13:21 +0000 Sat, 14 Jan 2023 10:07:24 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.132.0.4
Hostname: ubuntu-20-agent
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: 591c9f12-2938-3743-e2bf-c56a050d43d1
Boot ID: d08c1bf3-58d2-42f4-a94f-b5b5e908f83a
Kernel Version: 5.15.0-1027-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.22
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (18 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default cloud-spanner-emulator-7d7766f55c-ng2xw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m50s
gcp-auth gcp-auth-6f5c66bfb9-pjmhb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m43s
kube-system coredns-565d847f94-hzdcg 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 7m51s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m49s
kube-system csi-hostpath-provisioner-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m49s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m49s
kube-system csi-hostpath-snapshotter-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m49s
kube-system csi-hostpathplugin-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m49s
kube-system etcd-ubuntu-20-agent 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 8m4s
kube-system kube-apiserver-ubuntu-20-agent 250m (3%) 0 (0%) 0 (0%) 0 (0%) 8m4s
kube-system kube-controller-manager-ubuntu-20-agent 200m (2%) 0 (0%) 0 (0%) 0 (0%) 8m6s
kube-system kube-proxy-kg2xf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m52s
kube-system kube-scheduler-ubuntu-20-agent 100m (1%) 0 (0%) 0 (0%) 0 (0%) 8m4s
kube-system metrics-server-56c6cfbdd9-tg5kv 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 7m50s
kube-system snapshot-controller-67c8f9659-hb5bx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m49s
kube-system snapshot-controller-67c8f9659-lcxlj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m50s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m50s
kube-system tiller-deploy-696b5bfbb7-pg8sd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m50s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 370Mi (1%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m51s kube-proxy
Normal Starting 8m4s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m4s kubelet Node ubuntu-20-agent status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m4s kubelet Node ubuntu-20-agent status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m4s kubelet Node ubuntu-20-agent status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m4s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 7m54s kubelet Node ubuntu-20-agent status is now: NodeReady
Normal RegisteredNode 7m53s node-controller Node ubuntu-20-agent event: Registered Node ubuntu-20-agent in Controller
*
* ==> dmesg <==
* [Jan14 09:17] #2
[ +0.001147] #3
[ +0.000951] #4
[ +0.003160] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.001758] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[ +0.001399] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
[ +0.004161] #5
[ +0.000803] #6
[ +0.000759] #7
[ +0.058287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.436818] i8042: Warning: Keylock active
[ +0.007559] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.003340] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ +0.000697] platform eisa.0: Cannot allocate resource for EISA slot 1
[ +0.000662] platform eisa.0: Cannot allocate resource for EISA slot 2
[ +0.000730] platform eisa.0: Cannot allocate resource for EISA slot 3
[ +0.000684] platform eisa.0: Cannot allocate resource for EISA slot 4
[ +0.000723] platform eisa.0: Cannot allocate resource for EISA slot 5
[ +0.000673] platform eisa.0: Cannot allocate resource for EISA slot 6
[ +0.000645] platform eisa.0: Cannot allocate resource for EISA slot 7
[ +0.000627] platform eisa.0: Cannot allocate resource for EISA slot 8
[ +9.125102] kauditd_printk_skb: 34 callbacks suppressed
*
* ==> etcd [ab6d08dc2097] <==
* {"level":"info","ts":"2023-01-14T10:07:08.485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 switched to configuration voters=(15265396265148522630)"}
{"level":"info","ts":"2023-01-14T10:07:08.486Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"36fd114adae62b7a","local-member-id":"d3d995060bc0a086","added-peer-id":"d3d995060bc0a086","added-peer-peer-urls":["https://10.132.0.4:2380"]}
{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.132.0.4:2380"}
{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.132.0.4:2380"}
{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d3d995060bc0a086","initial-advertise-peer-urls":["https://10.132.0.4:2380"],"listen-peer-urls":["https://10.132.0.4:2380"],"advertise-client-urls":["https://10.132.0.4:2379"],"listen-client-urls":["https://10.132.0.4:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-14T10:07:08.488Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 is starting a new election at term 1"}
{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became pre-candidate at term 1"}
{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 received MsgPreVoteResp from d3d995060bc0a086 at term 1"}
{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became candidate at term 2"}
{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 received MsgVoteResp from d3d995060bc0a086 at term 2"}
{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3d995060bc0a086 became leader at term 2"}
{"level":"info","ts":"2023-01-14T10:07:09.378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3d995060bc0a086 elected leader d3d995060bc0a086 at term 2"}
{"level":"info","ts":"2023-01-14T10:07:09.378Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"36fd114adae62b7a","local-member-id":"d3d995060bc0a086","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d3d995060bc0a086","local-member-attributes":"{Name:ubuntu-20-agent ClientURLs:[https://10.132.0.4:2379]}","request-path":"/0/members/d3d995060bc0a086/attributes","cluster-id":"36fd114adae62b7a","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-14T10:07:09.379Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:07:09.380Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-14T10:07:09.380Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-14T10:07:09.381Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.132.0.4:2379"}
{"level":"info","ts":"2023-01-14T10:07:09.381Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 10:15:18 up 57 min, 0 users, load average: 0.18, 0.57, 0.43
Linux ubuntu-20-agent 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [2ff1699b4878] <==
* W0114 10:08:29.735793 1 handler_proxy.go:105] no RequestInfo found in the context
E0114 10:08:29.735839 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0114 10:08:29.735846 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0114 10:08:29.736966 1 handler_proxy.go:105] no RequestInfo found in the context
E0114 10:08:29.737043 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0114 10:08:29.737055 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0114 10:10:29.736317 1 handler_proxy.go:105] no RequestInfo found in the context
E0114 10:10:29.736354 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0114 10:10:29.736360 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0114 10:10:29.737430 1 handler_proxy.go:105] no RequestInfo found in the context
E0114 10:10:29.737513 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0114 10:10:29.737535 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0114 10:12:12.170806 1 handler_proxy.go:105] no RequestInfo found in the context
W0114 10:12:12.170806 1 handler_proxy.go:105] no RequestInfo found in the context
E0114 10:12:12.170895 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0114 10:12:12.170902 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0114 10:12:12.170906 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0114 10:12:12.172033 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0114 10:12:19.702602 1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.183.89:443: connect: connection refused
E0114 10:12:19.702941 1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.183.89:443: connect: connection refused
E0114 10:12:19.708055 1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.183.89:443: connect: connection refused
E0114 10:12:19.728858 1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.183.89:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.183.89:443: connect: connection refused
*
* ==> kube-controller-manager [604a7cca50ac] <==
* I0114 10:07:56.669462 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 10:08:23.011401 1 job_controller.go:510] enqueueing job gcp-auth/gcp-auth-certs-create
E0114 10:08:23.015508 1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0114 10:08:23.017363 1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0114 10:08:23.034137 1 job_controller.go:510] enqueueing job gcp-auth/gcp-auth-certs-create
I0114 10:08:24.006060 1 job_controller.go:510] enqueueing job gcp-auth/gcp-auth-certs-patch
I0114 10:08:24.023237 1 job_controller.go:510] enqueueing job gcp-auth/gcp-auth-certs-patch
E0114 10:08:26.345833 1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0114 10:08:26.682841 1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0114 10:08:56.352425 1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0114 10:08:56.695296 1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0114 10:09:26.358600 1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0114 10:09:26.705664 1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0114 10:09:56.195344 1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0114 10:09:56.197531 1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0114 10:09:56.365404 1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0114 10:09:56.716232 1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0114 10:10:26.371474 1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0114 10:10:26.727043 1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0114 10:10:56.377359 1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0114 10:10:56.737522 1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0114 10:11:26.383986 1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0114 10:11:26.748518 1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0114 10:11:56.390598 1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0114 10:11:56.760126 1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
*
* ==> kube-proxy [4f53bf8a8305] <==
* I0114 10:07:27.611974 1 node.go:163] Successfully retrieved node IP: 10.132.0.4
I0114 10:07:27.612044 1 server_others.go:138] "Detected node IP" address="10.132.0.4"
I0114 10:07:27.612070 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0114 10:07:27.631260 1 server_others.go:206] "Using iptables Proxier"
I0114 10:07:27.631301 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0114 10:07:27.631313 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0114 10:07:27.631329 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0114 10:07:27.631364 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:07:27.631508 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:07:27.631705 1 server.go:661] "Version info" version="v1.25.3"
I0114 10:07:27.631724 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:07:27.632227 1 config.go:317] "Starting service config controller"
I0114 10:07:27.632242 1 config.go:226] "Starting endpoint slice config controller"
I0114 10:07:27.632259 1 shared_informer.go:255] Waiting for caches to sync for service config
I0114 10:07:27.632261 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0114 10:07:27.632366 1 config.go:444] "Starting node config controller"
I0114 10:07:27.632377 1 shared_informer.go:255] Waiting for caches to sync for node config
I0114 10:07:27.732673 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0114 10:07:27.732770 1 shared_informer.go:262] Caches are synced for node config
I0114 10:07:27.732788 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-scheduler [9fb1ebe94b48] <==
* E0114 10:07:11.194036 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0114 10:07:11.194048 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0114 10:07:11.193962 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0114 10:07:11.194114 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0114 10:07:11.193929 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0114 10:07:11.194159 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0114 10:07:11.194203 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0114 10:07:11.194226 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0114 10:07:11.194320 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0114 10:07:11.194360 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0114 10:07:12.003311 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0114 10:07:12.003342 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0114 10:07:12.094781 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0114 10:07:12.094810 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0114 10:07:12.107338 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0114 10:07:12.107444 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0114 10:07:12.129544 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0114 10:07:12.129574 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0114 10:07:12.197369 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0114 10:07:12.197394 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0114 10:07:12.266708 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0114 10:07:12.266775 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0114 10:07:12.266708 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0114 10:07:12.266805 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0114 10:07:15.191506 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-12-12 17:50:41 UTC, end at Sat 2023-01-14 10:15:18 UTC. --
Jan 14 10:10:00 ubuntu-20-agent kubelet[17674]: E0114 10:10:00.366805 17674 resource_metrics.go:126] "Error getting summary for resourceMetric prometheus endpoint" err="failed to list pod stats: failed to list all container stats: rpc error: code = Unknown desc = Error response from daemon: No such container: 7e5fd0e68c151c52ee70cce243d5fdfb2b9d8e8ed19a413b814615d1da93e0a2"
Jan 14 10:10:14 ubuntu-20-agent kubelet[17674]: E0114 10:10:14.417063 17674 remote_runtime.go:1050] "ListContainerStats with filter from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 9d9123abb65f144d7aec82a411e226f0004a5f07dc4ae821d10cb059d1f6c64c" filter="&ContainerStatsFilter{Id:,PodSandboxId:,LabelSelector:map[string]string{},}"
Jan 14 10:10:14 ubuntu-20-agent kubelet[17674]: E0114 10:10:14.417105 17674 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to list pod stats: failed to list all container stats: rpc error: code = Unknown desc = Error response from daemon: No such container: 9d9123abb65f144d7aec82a411e226f0004a5f07dc4ae821d10cb059d1f6c64c"
Jan 14 10:11:07 ubuntu-20-agent kubelet[17674]: E0114 10:11:07.699557 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/7.log\": no such file or directory" containerName="etcd"
Jan 14 10:11:07 ubuntu-20-agent kubelet[17674]: E0114 10:11:07.700422 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/7.log\": no such file or directory" containerName="kube-controller-manager"
Jan 14 10:11:07 ubuntu-20-agent kubelet[17674]: E0114 10:11:07.701124 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/12.log\": no such file or directory" containerName="kube-scheduler"
Jan 14 10:12:00 ubuntu-20-agent kubelet[17674]: E0114 10:12:00.970522 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/11.log\": no such file or directory" containerName="etcd"
Jan 14 10:12:00 ubuntu-20-agent kubelet[17674]: E0114 10:12:00.971293 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/15.log\": no such file or directory" containerName="kube-controller-manager"
Jan 14 10:12:00 ubuntu-20-agent kubelet[17674]: E0114 10:12:00.972045 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/13.log\": no such file or directory" containerName="kube-scheduler"
Jan 14 10:12:55 ubuntu-20-agent kubelet[17674]: E0114 10:12:55.227704 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/21.log\": no such file or directory" containerName="etcd"
Jan 14 10:12:55 ubuntu-20-agent kubelet[17674]: E0114 10:12:55.228541 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/18.log\": no such file or directory" containerName="kube-controller-manager"
Jan 14 10:12:55 ubuntu-20-agent kubelet[17674]: E0114 10:12:55.229180 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/19.log\": no such file or directory" containerName="kube-scheduler"
Jan 14 10:13:49 ubuntu-20-agent kubelet[17674]: E0114 10:13:49.485747 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/15.log\": no such file or directory" containerName="etcd"
Jan 14 10:13:49 ubuntu-20-agent kubelet[17674]: E0114 10:13:49.485960 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/18.log\": no such file or directory" containerName="kube-controller-manager"
Jan 14 10:13:49 ubuntu-20-agent kubelet[17674]: E0114 10:13:49.486082 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/18.log\": no such file or directory" containerName="kube-scheduler"
Jan 14 10:14:43 ubuntu-20-agent kubelet[17674]: E0114 10:14:43.741779 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_etcd-ubuntu-20-agent_65715dc9e4cbf94a2dca360adc587df7/etcd/10.log\": no such file or directory" containerName="etcd"
Jan 14 10:14:43 ubuntu-20-agent kubelet[17674]: E0114 10:14:43.742766 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-controller-manager-ubuntu-20-agent_4a4f6d86d11728f017fab2e2d3b5fef6/kube-controller-manager/11.log\": no such file or directory" containerName="kube-controller-manager"
Jan 14 10:14:43 ubuntu-20-agent kubelet[17674]: E0114 10:14:43.743678 17674 cri_stats_provider.go:666] "Unable to fetch container log stats" err="failed to get fsstats for \"/var/log/pods/kube-system_kube-scheduler-ubuntu-20-agent_136b67c9bcacbb6db8fa00666fead41b/kube-scheduler/20.log\": no such file or directory" containerName="kube-scheduler"
Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.726620 17674 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpnm7\" (UniqueName: \"kubernetes.io/projected/99b244b0-02bb-4d7b-8b98-f38c99f1949e-kube-api-access-wpnm7\") pod \"99b244b0-02bb-4d7b-8b98-f38c99f1949e\" (UID: \"99b244b0-02bb-4d7b-8b98-f38c99f1949e\") "
Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.726700 17674 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/99b244b0-02bb-4d7b-8b98-f38c99f1949e-tmp-dir\") pod \"99b244b0-02bb-4d7b-8b98-f38c99f1949e\" (UID: \"99b244b0-02bb-4d7b-8b98-f38c99f1949e\") "
Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: W0114 10:15:18.726982 17674 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/99b244b0-02bb-4d7b-8b98-f38c99f1949e/volumes/kubernetes.io~empty-dir/tmp-dir: clearQuota called, but quotas disabled
Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.727117 17674 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99b244b0-02bb-4d7b-8b98-f38c99f1949e-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "99b244b0-02bb-4d7b-8b98-f38c99f1949e" (UID: "99b244b0-02bb-4d7b-8b98-f38c99f1949e"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.728743 17674 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b244b0-02bb-4d7b-8b98-f38c99f1949e-kube-api-access-wpnm7" (OuterVolumeSpecName: "kube-api-access-wpnm7") pod "99b244b0-02bb-4d7b-8b98-f38c99f1949e" (UID: "99b244b0-02bb-4d7b-8b98-f38c99f1949e"). InnerVolumeSpecName "kube-api-access-wpnm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.827361 17674 reconciler.go:399] "Volume detached for volume \"kube-api-access-wpnm7\" (UniqueName: \"kubernetes.io/projected/99b244b0-02bb-4d7b-8b98-f38c99f1949e-kube-api-access-wpnm7\") on node \"ubuntu-20-agent\" DevicePath \"\""
Jan 14 10:15:18 ubuntu-20-agent kubelet[17674]: I0114 10:15:18.827403 17674 reconciler.go:399] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/99b244b0-02bb-4d7b-8b98-f38c99f1949e-tmp-dir\") on node \"ubuntu-20-agent\" DevicePath \"\""
*
* ==> storage-provisioner [7c5cbae47eb4] <==
* I0114 10:07:30.298684 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0114 10:07:30.307931 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0114 10:07:30.307973 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0114 10:07:30.314738 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0114 10:07:30.314894 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_c56e123b-e8b9-491b-96eb-2e83e3e0c4bc!
I0114 10:07:30.314907 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8935411-b4a4-460f-9f6b-35ddc99495f4", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent_c56e123b-e8b9-491b-96eb-2e83e3e0c4bc became leader
I0114 10:07:30.415551 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent_c56e123b-e8b9-491b-96eb-2e83e3e0c4bc!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context minikube describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context minikube describe pod : exit status 1 (71.555314ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context minikube describe pod : exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (323.20s)