=== RUN TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run: kubectl --context functional-20220602101615-7689 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:825: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.64.47 PodIP:192.168.64.47 StartTime:2022-06-02 10:16:49 -0700 PDT ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc00111a588 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0004b60e0} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-apiserver:v1.23.6 ImageID:docker-pullable://k8s.gcr.io/kube-apiserver@sha256:0cd8c0bed8d89d914ee5df41e8a40112fb0a28804429c7964296abedc94da9f1 ContainerID:docker://ecdb32a6035dbaa348167f10cc767622152b3166294dae28e899b7abf321b2c8}]}
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
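(Editor's note, not part of the captured log: the failure above is a pod whose phase is Running while its Ready condition is False. Below is a minimal client-go sketch, not the test's actual code, of the same per-component check; the kubeconfig context name is copied from this run and the rest is illustrative.)

// readiness_check.go: list control-plane pods and report phase vs. Ready condition,
// mirroring the kubectl query shown at the top of this log.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same context the test uses (`kubectl --context <profile> ...`).
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "functional-20220602101615-7689"},
	).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same selector as the failing test: control-plane pods in kube-system.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// A pod can be Running (phase) yet not Ready (condition), which is
		// exactly the kube-apiserver state reported in the failure above.
		fmt.Printf("%s: phase=%s ready=%v\n", p.Labels["component"], p.Status.Phase, ready)
	}
}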
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220602101615-7689 -n functional-20220602101615-7689
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p functional-20220602101615-7689 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220602101615-7689 logs -n 25: (3.138361012s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------|--------------------------------|---------|----------------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------|--------------------------------|---------|----------------|---------------------|---------------------|
| pause | nospam-20220602101526-7689 --log_dir | nospam-20220602101526-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:16 PDT |
| | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220602101526-7689 | | | | | |
| | pause | | | | | |
| unpause | nospam-20220602101526-7689 --log_dir | nospam-20220602101526-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:16 PDT |
| | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220602101526-7689 | | | | | |
| | unpause | | | | | |
| unpause | nospam-20220602101526-7689 --log_dir | nospam-20220602101526-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:16 PDT |
| | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220602101526-7689 | | | | | |
| | unpause | | | | | |
| unpause | nospam-20220602101526-7689 --log_dir | nospam-20220602101526-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:16 PDT |
| | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220602101526-7689 | | | | | |
| | unpause | | | | | |
| stop | nospam-20220602101526-7689 --log_dir | nospam-20220602101526-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:16 PDT |
| | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220602101526-7689 | | | | | |
| | stop | | | | | |
| stop | nospam-20220602101526-7689 --log_dir | nospam-20220602101526-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:16 PDT |
| | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220602101526-7689 | | | | | |
| | stop | | | | | |
| stop | nospam-20220602101526-7689 --log_dir | nospam-20220602101526-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:16 PDT |
| | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-20220602101526-7689 | | | | | |
| | stop | | | | | |
| delete | -p nospam-20220602101526-7689 | nospam-20220602101526-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:16 PDT |
| start | -p | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:16 PDT | 02 Jun 22 10:17 PDT |
| | functional-20220602101615-7689 | | | | | |
| | --memory=4000 | | | | | |
| | --apiserver-port=8441 | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| start | -p | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | functional-20220602101615-7689 | | | | | |
| | --alsologtostderr -v=8 | | | | | |
| cache | functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | cache add k8s.gcr.io/pause:3.1 | | | | | |
| cache | functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | cache add k8s.gcr.io/pause:3.3 | | | | | |
| cache | functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | cache add | | | | | |
| | k8s.gcr.io/pause:latest | | | | | |
| cache | functional-20220602101615-7689 cache add | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | minikube-local-cache-test:functional-20220602101615-7689 | | | | | |
| cache | functional-20220602101615-7689 cache delete | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | minikube-local-cache-test:functional-20220602101615-7689 | | | | | |
| cache | delete k8s.gcr.io/pause:3.3 | minikube | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| cache | list | minikube | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| ssh | functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | ssh sudo crictl images | | | | | |
| ssh | functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | ssh sudo docker rmi | | | | | |
| | k8s.gcr.io/pause:latest | | | | | |
| cache | functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | cache reload | | | | | |
| ssh | functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | ssh sudo crictl inspecti | | | | | |
| | k8s.gcr.io/pause:latest | | | | | |
| cache | delete k8s.gcr.io/pause:3.1 | minikube | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| cache | delete k8s.gcr.io/pause:latest | minikube | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| kubectl | functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | kubectl -- --context | | | | | |
| | functional-20220602101615-7689 | | | | | |
| | get pods | | | | | |
| start | -p functional-20220602101615-7689 | functional-20220602101615-7689 | jenkins | v1.26.0-beta.1 | 02 Jun 22 10:17 PDT | 02 Jun 22 10:17 PDT |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
|---------|-----------------------------------------------------------------------------|--------------------------------|---------|----------------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/06/02 10:17:20
Running on machine: 20446
Binary: Built with gc go1.18.2 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0602 10:17:20.908062 8182 out.go:296] Setting OutFile to fd 1 ...
I0602 10:17:20.908282 8182 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0602 10:17:20.908285 8182 out.go:309] Setting ErrFile to fd 2...
I0602 10:17:20.908288 8182 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0602 10:17:20.908394 8182 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
I0602 10:17:20.908649 8182 out.go:303] Setting JSON to false
I0602 10:17:20.923553 8182 start.go:115] hostinfo: {"hostname":"20446.local","uptime":4610,"bootTime":1654185630,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0602 10:17:20.923641 8182 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0602 10:17:20.944464 8182 out.go:177] * [functional-20220602101615-7689] minikube v1.26.0-beta.1 on Darwin 12.4
I0602 10:17:20.987805 8182 notify.go:193] Checking for updates...
I0602 10:17:21.009503 8182 out.go:177] - MINIKUBE_LOCATION=14269
I0602 10:17:21.030493 8182 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
I0602 10:17:21.051585 8182 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0602 10:17:21.073651 8182 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0602 10:17:21.095652 8182 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
I0602 10:17:21.118029 8182 config.go:178] Loaded profile config "functional-20220602101615-7689": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0602 10:17:21.118103 8182 driver.go:358] Setting default libvirt URI to qemu:///system
I0602 10:17:21.118752 8182 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0602 10:17:21.118829 8182 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0602 10:17:21.125792 8182 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:55801
I0602 10:17:21.126186 8182 main.go:134] libmachine: () Calling .GetVersion
I0602 10:17:21.126616 8182 main.go:134] libmachine: Using API Version 1
I0602 10:17:21.126625 8182 main.go:134] libmachine: () Calling .SetConfigRaw
I0602 10:17:21.126857 8182 main.go:134] libmachine: () Calling .GetMachineName
I0602 10:17:21.126957 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:21.154441 8182 out.go:177] * Using the hyperkit driver based on existing profile
I0602 10:17:21.197605 8182 start.go:284] selected driver: hyperkit
I0602 10:17:21.197622 8182 start.go:806] validating driver "hyperkit" against &{Name:functional-20220602101615-7689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPor
t:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602101615-7689 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.47 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:f
alse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0602 10:17:21.197869 8182 start.go:817] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0602 10:17:21.198108 8182 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0602 10:17:21.198238 8182 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0602 10:17:21.205746 8182 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.26.0-beta.1
I0602 10:17:21.208655 8182 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0602 10:17:21.208667 8182 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0602 10:17:21.210527 8182 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0602 10:17:21.210553 8182 cni.go:95] Creating CNI manager for ""
I0602 10:17:21.210560 8182 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0602 10:17:21.210574 8182 start_flags.go:306] config:
{Name:functional-20220602101615-7689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-2022060210
1615-7689 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.47 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0602 10:17:21.210729 8182 iso.go:128] acquiring lock: {Name:mk50f42f5c5f9c8de34a31b123558a500a642ca4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0602 10:17:21.253612 8182 out.go:177] * Starting control plane node functional-20220602101615-7689 in cluster functional-20220602101615-7689
I0602 10:17:21.275672 8182 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0602 10:17:21.275747 8182 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
I0602 10:17:21.275778 8182 cache.go:57] Caching tarball of preloaded images
I0602 10:17:21.275947 8182 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0602 10:17:21.275967 8182 cache.go:60] Finished verifying existence of preloaded tar for v1.23.6 on docker
I0602 10:17:21.276174 8182 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689/config.json ...
I0602 10:17:21.276951 8182 cache.go:206] Successfully downloaded all kic artifacts
I0602 10:17:21.277013 8182 start.go:352] acquiring machines lock for functional-20220602101615-7689: {Name:mkea1b4fd445295b0915910ea8dd4baf7641e4d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0602 10:17:21.277112 8182 start.go:356] acquired machines lock for "functional-20220602101615-7689" in 83.722µs
I0602 10:17:21.277142 8182 start.go:94] Skipping create...Using existing machine configuration
I0602 10:17:21.277154 8182 fix.go:55] fixHost starting:
I0602 10:17:21.277592 8182 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0602 10:17:21.277617 8182 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0602 10:17:21.284598 8182 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:55803
I0602 10:17:21.284986 8182 main.go:134] libmachine: () Calling .GetVersion
I0602 10:17:21.285315 8182 main.go:134] libmachine: Using API Version 1
I0602 10:17:21.285322 8182 main.go:134] libmachine: () Calling .SetConfigRaw
I0602 10:17:21.285510 8182 main.go:134] libmachine: () Calling .GetMachineName
I0602 10:17:21.285628 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:21.285715 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetState
I0602 10:17:21.285804 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0602 10:17:21.285872 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | hyperkit pid from json: 8066
I0602 10:17:21.286675 8182 fix.go:103] recreateIfNeeded on functional-20220602101615-7689: state=Running err=<nil>
W0602 10:17:21.286687 8182 fix.go:129] unexpected machine state, will restart: <nil>
I0602 10:17:21.329498 8182 out.go:177] * Updating the running hyperkit "functional-20220602101615-7689" VM ...
I0602 10:17:21.350565 8182 machine.go:88] provisioning docker machine ...
I0602 10:17:21.350591 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:21.350896 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetMachineName
I0602 10:17:21.351075 8182 buildroot.go:166] provisioning hostname "functional-20220602101615-7689"
I0602 10:17:21.351102 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetMachineName
I0602 10:17:21.351299 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:21.351479 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:21.351697 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.351904 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.352039 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:21.352273 8182 main.go:134] libmachine: Using SSH client type: native
I0602 10:17:21.352570 8182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil> [] 0s} 192.168.64.47 22 <nil> <nil>}
I0602 10:17:21.352584 8182 main.go:134] libmachine: About to run SSH command:
sudo hostname functional-20220602101615-7689 && echo "functional-20220602101615-7689" | sudo tee /etc/hostname
I0602 10:17:21.431239 8182 main.go:134] libmachine: SSH cmd err, output: <nil>: functional-20220602101615-7689
I0602 10:17:21.431251 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:21.431393 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:21.431479 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.431550 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.431626 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:21.431738 8182 main.go:134] libmachine: Using SSH client type: native
I0602 10:17:21.431844 8182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil> [] 0s} 192.168.64.47 22 <nil> <nil>}
I0602 10:17:21.431853 8182 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-20220602101615-7689' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20220602101615-7689/g' /etc/hosts;
else
echo '127.0.1.1 functional-20220602101615-7689' | sudo tee -a /etc/hosts;
fi
fi
I0602 10:17:21.500598 8182 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0602 10:17:21.500612 8182 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath
:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
I0602 10:17:21.500627 8182 buildroot.go:174] setting up certificates
I0602 10:17:21.500637 8182 provision.go:83] configureAuth start
I0602 10:17:21.500642 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetMachineName
I0602 10:17:21.500764 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetIP
I0602 10:17:21.500851 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:21.500936 8182 provision.go:138] copyHostCerts
I0602 10:17:21.501007 8182 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
I0602 10:17:21.501016 8182 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
I0602 10:17:21.501117 8182 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
I0602 10:17:21.501304 8182 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
I0602 10:17:21.501311 8182 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
I0602 10:17:21.501368 8182 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
I0602 10:17:21.501501 8182 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
I0602 10:17:21.501507 8182 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
I0602 10:17:21.501561 8182 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1679 bytes)
I0602 10:17:21.501677 8182 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.functional-20220602101615-7689 san=[192.168.64.47 192.168.64.47 localhost 127.0.0.1 minikube functional-20220602101615-7689]
I0602 10:17:21.670862 8182 provision.go:172] copyRemoteCerts
I0602 10:17:21.670920 8182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0602 10:17:21.670940 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:21.671090 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:21.671209 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.671313 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:21.671404 8182 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602101615-7689/id_rsa Username:docker}
I0602 10:17:21.711472 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0602 10:17:21.726107 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0602 10:17:21.745424 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
I0602 10:17:21.760841 8182 provision.go:86] duration metric: configureAuth took 260.187616ms
I0602 10:17:21.760852 8182 buildroot.go:189] setting minikube options for container-runtime
I0602 10:17:21.761001 8182 config.go:178] Loaded profile config "functional-20220602101615-7689": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0602 10:17:21.761025 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:21.761153 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:21.761251 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:21.761328 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.761394 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.761467 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:21.761570 8182 main.go:134] libmachine: Using SSH client type: native
I0602 10:17:21.761664 8182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil> [] 0s} 192.168.64.47 22 <nil> <nil>}
I0602 10:17:21.761669 8182 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0602 10:17:21.829126 8182 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0602 10:17:21.829133 8182 buildroot.go:70] root file system type: tmpfs
I0602 10:17:21.829246 8182 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0602 10:17:21.829267 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:21.829400 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:21.829478 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.829551 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.829629 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:21.829753 8182 main.go:134] libmachine: Using SSH client type: native
I0602 10:17:21.829861 8182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil> [] 0s} 192.168.64.47 22 <nil> <nil>}
I0602 10:17:21.829905 8182 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0602 10:17:21.906783 8182 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0602 10:17:21.906799 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:21.906932 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:21.907004 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.907088 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.907183 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:21.907311 8182 main.go:134] libmachine: Using SSH client type: native
I0602 10:17:21.907418 8182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil> [] 0s} 192.168.64.47 22 <nil> <nil>}
I0602 10:17:21.907427 8182 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0602 10:17:21.977746 8182 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0602 10:17:21.977758 8182 machine.go:91] provisioned docker machine in 627.168514ms
I0602 10:17:21.977766 8182 start.go:306] post-start starting for "functional-20220602101615-7689" (driver="hyperkit")
I0602 10:17:21.977769 8182 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0602 10:17:21.977780 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:21.977948 8182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0602 10:17:21.977959 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:21.978042 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:21.978117 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:21.978193 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:21.978254 8182 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602101615-7689/id_rsa Username:docker}
I0602 10:17:22.016002 8182 ssh_runner.go:195] Run: cat /etc/os-release
I0602 10:17:22.018293 8182 info.go:137] Remote host: Buildroot 2021.02.12
I0602 10:17:22.018301 8182 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
I0602 10:17:22.018406 8182 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
I0602 10:17:22.018537 8182 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/76892.pem -> 76892.pem in /etc/ssl/certs
I0602 10:17:22.018662 8182 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/test/nested/copy/7689/hosts -> hosts in /etc/test/nested/copy/7689
I0602 10:17:22.018704 8182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/7689
I0602 10:17:22.024782 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/76892.pem --> /etc/ssl/certs/76892.pem (1708 bytes)
I0602 10:17:22.040405 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/test/nested/copy/7689/hosts --> /etc/test/nested/copy/7689/hosts (40 bytes)
I0602 10:17:22.055657 8182 start.go:309] post-start completed in 77.882656ms
I0602 10:17:22.055670 8182 fix.go:57] fixHost completed within 778.506834ms
I0602 10:17:22.055682 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:22.055803 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:22.055881 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:22.055955 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:22.056026 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:22.056112 8182 main.go:134] libmachine: Using SSH client type: native
I0602 10:17:22.056215 8182 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil> [] 0s} 192.168.64.47 22 <nil> <nil>}
I0602 10:17:22.056219 8182 main.go:134] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0602 10:17:22.121808 8182 main.go:134] libmachine: SSH cmd err, output: <nil>: 1654190242.202540968
I0602 10:17:22.121813 8182 fix.go:207] guest clock: 1654190242.202540968
I0602 10:17:22.121817 8182 fix.go:220] Guest: 2022-06-02 10:17:22.202540968 -0700 PDT Remote: 2022-06-02 10:17:22.055672 -0700 PDT m=+1.189541630 (delta=146.868968ms)
I0602 10:17:22.121836 8182 fix.go:191] guest clock delta is within tolerance: 146.868968ms
I0602 10:17:22.121838 8182 start.go:81] releasing machines lock for "functional-20220602101615-7689", held for 844.703825ms
I0602 10:17:22.121854 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:22.121976 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetIP
I0602 10:17:22.122056 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:22.122141 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:22.122207 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:22.122509 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:22.122636 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:22.122711 8182 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0602 10:17:22.122734 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:22.122772 8182 ssh_runner.go:195] Run: systemctl --version
I0602 10:17:22.122780 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:22.122813 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:22.122840 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:22.122880 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:22.122914 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:22.122949 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:22.122975 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:22.123004 8182 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602101615-7689/id_rsa Username:docker}
I0602 10:17:22.123036 8182 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602101615-7689/id_rsa Username:docker}
I0602 10:17:22.160046 8182 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0602 10:17:22.160135 8182 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0602 10:17:22.276989 8182 docker.go:610] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-20220602101615-7689
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/pause:latest
-- /stdout --
I0602 10:17:22.277004 8182 docker.go:541] Images already preloaded, skipping extraction
I0602 10:17:22.277096 8182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0602 10:17:22.286998 8182 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0602 10:17:22.296199 8182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0602 10:17:22.304896 8182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0602 10:17:22.313848 8182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0602 10:17:22.324617 8182 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0602 10:17:22.453125 8182 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0602 10:17:22.578688 8182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0602 10:17:22.707277 8182 ssh_runner.go:195] Run: sudo systemctl start docker
I0602 10:17:22.717362 8182 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0602 10:17:22.740712 8182 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0602 10:17:22.788616 8182 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
I0602 10:17:22.788743 8182 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0602 10:17:22.813260 8182 out.go:177] - apiserver.enable-admission-plugins=NamespaceAutoProvision
I0602 10:17:22.834434 8182 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
I0602 10:17:22.834563 8182 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0602 10:17:22.861443 8182 docker.go:610] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-20220602101615-7689
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/pause:latest
-- /stdout --
I0602 10:17:22.861451 8182 docker.go:541] Images already preloaded, skipping extraction
I0602 10:17:22.861506 8182 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0602 10:17:22.880498 8182 docker.go:610] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-20220602101615-7689
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/pause:latest
-- /stdout --
I0602 10:17:22.880513 8182 cache_images.go:84] Images are preloaded, skipping loading
I0602 10:17:22.880570 8182 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0602 10:17:22.906050 8182 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
I0602 10:17:22.906070 8182 cni.go:95] Creating CNI manager for ""
I0602 10:17:22.906075 8182 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0602 10:17:22.906082 8182 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0602 10:17:22.906089 8182 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.47 APIServerPort:8441 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20220602101615-7689 NodeName:functional-20220602101615-7689 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.47 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0602 10:17:22.906163 8182 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.47
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "functional-20220602101615-7689"
kubeletExtraArgs:
node-ip: 192.168.64.47
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.47"]
extraArgs:
enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.6
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0602 10:17:22.906210 8182 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20220602101615-7689 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.47
[Install]
config:
{KubernetesVersion:v1.23.6 ClusterName:functional-20220602101615-7689 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
I0602 10:17:22.906260 8182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
I0602 10:17:22.912764 8182 binaries.go:44] Found k8s binaries, skipping transfer
I0602 10:17:22.912805 8182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0602 10:17:22.918930 8182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
I0602 10:17:22.930542 8182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0602 10:17:22.946385 8182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1904 bytes)
I0602 10:17:22.957243 8182 ssh_runner.go:195] Run: grep 192.168.64.47 control-plane.minikube.internal$ /etc/hosts
I0602 10:17:22.959614 8182 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689 for IP: 192.168.64.47
I0602 10:17:22.959702 8182 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
I0602 10:17:22.959754 8182 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
I0602 10:17:22.959828 8182 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689/client.key
I0602 10:17:22.959882 8182 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689/apiserver.key.98630bf9
I0602 10:17:22.959928 8182 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689/proxy-client.key
I0602 10:17:22.960103 8182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/7689.pem (1338 bytes)
W0602 10:17:22.960134 8182 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/7689_empty.pem, impossibly tiny 0 bytes
I0602 10:17:22.960148 8182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
I0602 10:17:22.960175 8182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
I0602 10:17:22.960203 8182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
I0602 10:17:22.960227 8182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1679 bytes)
I0602 10:17:22.960285 8182 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/76892.pem (1708 bytes)
I0602 10:17:22.960759 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0602 10:17:22.976813 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0602 10:17:22.993682 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0602 10:17:23.009899 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602101615-7689/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0602 10:17:23.025742 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0602 10:17:23.044803 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0602 10:17:23.064360 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0602 10:17:23.079989 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0602 10:17:23.095416 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/7689.pem --> /usr/share/ca-certificates/7689.pem (1338 bytes)
I0602 10:17:23.110935 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/76892.pem --> /usr/share/ca-certificates/76892.pem (1708 bytes)
I0602 10:17:23.127477 8182 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0602 10:17:23.143474 8182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0602 10:17:23.154413 8182 ssh_runner.go:195] Run: openssl version
I0602 10:17:23.157803 8182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7689.pem && ln -fs /usr/share/ca-certificates/7689.pem /etc/ssl/certs/7689.pem"
I0602 10:17:23.165137 8182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7689.pem
I0602 10:17:23.167964 8182 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 2 17:16 /usr/share/ca-certificates/7689.pem
I0602 10:17:23.167992 8182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7689.pem
I0602 10:17:23.171619 8182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7689.pem /etc/ssl/certs/51391683.0"
I0602 10:17:23.177678 8182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76892.pem && ln -fs /usr/share/ca-certificates/76892.pem /etc/ssl/certs/76892.pem"
I0602 10:17:23.185153 8182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76892.pem
I0602 10:17:23.188222 8182 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 2 17:16 /usr/share/ca-certificates/76892.pem
I0602 10:17:23.188248 8182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76892.pem
I0602 10:17:23.191888 8182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76892.pem /etc/ssl/certs/3ec20f2e.0"
I0602 10:17:23.197927 8182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0602 10:17:23.205260 8182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0602 10:17:23.208046 8182 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 2 17:12 /usr/share/ca-certificates/minikubeCA.pem
I0602 10:17:23.208076 8182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0602 10:17:23.211448 8182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
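Each CA in the preceding block is exposed to the guest's TLS stack the standard OpenSSL way: compute the certificate's subject hash and symlink the PEM under /etc/ssl/certs/<hash>.0. A minimal shell sketch of that convention, using the minikubeCA.pem path from the log (illustrative, not the exact minikube code path):

  # compute the subject hash OpenSSL uses when looking up CA certs
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  # link the cert into the trust directory under <hash>.0 unless the link already exists
  sudo test -L "/etc/ssl/certs/${HASH}.0" || \
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"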
I0602 10:17:23.217444 8182 kubeadm.go:395] StartCluster: {Name:functional-20220602101615-7689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.23.6 ClusterName:functional-20220602101615-7689 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.47 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0602 10:17:23.217536 8182 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0602 10:17:23.236141 8182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0602 10:17:23.242495 8182 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0602 10:17:23.242505 8182 kubeadm.go:626] restartCluster start
I0602 10:17:23.242539 8182 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0602 10:17:23.249440 8182 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0602 10:17:23.249792 8182 kubeconfig.go:92] found "functional-20220602101615-7689" server: "https://192.168.64.47:8441"
I0602 10:17:23.250484 8182 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0602 10:17:23.256408 8182 kubeadm.go:593] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -22,7 +22,7 @@
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.47"]
extraArgs:
- enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+ enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
-- /stdout --
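The only delta is the apiserver's enable-admission-plugins value, which is driven by the ExtraOptions entry ({Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}) visible in the cluster config above; this is what forces the "needs reconfigure" path. As a hedged sketch (the exact test invocation is not shown in this excerpt), the user-facing equivalent of that setting is:

  # --extra-config populates the ExtraOptions seen in the config dump
  minikube start -p functional-20220602101615-7689 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision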
I0602 10:17:23.256415 8182 kubeadm.go:1092] stopping kube-system containers ...
I0602 10:17:23.256453 8182 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0602 10:17:23.276466 8182 docker.go:442] Stopping containers: [d6dc4f8ab1fb c87186e6c0f6 d12d254c553e 64dff3579d0e 8f0abc31d1c6 ba819c9c48d5 28e50ad5f267 c877ee90061f 2e20450262aa 1582f1012bc6 8c46fe91dcbb 343b86a926ce f1500ac0c521 6728c241eb9d 27824dc20d35]
I0602 10:17:23.276528 8182 ssh_runner.go:195] Run: docker stop d6dc4f8ab1fb c87186e6c0f6 d12d254c553e 64dff3579d0e 8f0abc31d1c6 ba819c9c48d5 28e50ad5f267 c877ee90061f 2e20450262aa 1582f1012bc6 8c46fe91dcbb 343b86a926ce f1500ac0c521 6728c241eb9d 27824dc20d35
I0602 10:17:28.450135 8182 ssh_runner.go:235] Completed: docker stop d6dc4f8ab1fb c87186e6c0f6 d12d254c553e 64dff3579d0e 8f0abc31d1c6 ba819c9c48d5 28e50ad5f267 c877ee90061f 2e20450262aa 1582f1012bc6 8c46fe91dcbb 343b86a926ce f1500ac0c521 6728c241eb9d 27824dc20d35: (5.173487408s)
I0602 10:17:28.450182 8182 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0602 10:17:28.497702 8182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0602 10:17:28.505753 8182 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Jun 2 17:16 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Jun 2 17:16 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2059 Jun 2 17:16 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Jun 2 17:16 /etc/kubernetes/scheduler.conf
I0602 10:17:28.505798 8182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I0602 10:17:28.512829 8182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I0602 10:17:28.518591 8182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I0602 10:17:28.525752 8182 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0602 10:17:28.525795 8182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0602 10:17:28.531544 8182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I0602 10:17:28.537792 8182 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0602 10:17:28.537842 8182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0602 10:17:28.544656 8182 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0602 10:17:28.551883 8182 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0602 10:17:28.551893 8182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0602 10:17:28.588797 8182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0602 10:17:29.268525 8182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0602 10:17:29.523146 8182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0602 10:17:29.576254 8182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
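Rather than a full kubeadm init, restartCluster replays only selected kubeadm phases against the regenerated /var/tmp/minikube/kubeadm.yaml. A condensed sketch of the same sequence, assuming the binaries path used above:

  # $phase is deliberately unquoted so entries like "certs all" split into subcommand + argument
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
      kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done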
I0602 10:17:29.628654 8182 api_server.go:51] waiting for apiserver process to appear ...
I0602 10:17:29.628705 8182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0602 10:17:29.638883 8182 api_server.go:71] duration metric: took 10.235245ms to wait for apiserver process to appear ...
I0602 10:17:29.638891 8182 api_server.go:87] waiting for apiserver healthz status ...
I0602 10:17:29.638901 8182 api_server.go:240] Checking apiserver healthz at https://192.168.64.47:8441/healthz ...
I0602 10:17:29.643492 8182 api_server.go:266] https://192.168.64.47:8441/healthz returned 200:
ok
I0602 10:17:29.648962 8182 api_server.go:140] control plane version: v1.23.6
I0602 10:17:29.648970 8182 api_server.go:130] duration metric: took 10.075818ms to wait for apiserver health ...
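The health wait is just an HTTPS GET against the apiserver's /healthz endpoint on the node IP; anything other than a 200 "ok" keeps the loop polling. A manual equivalent (illustrative; -k skips verification because the minikube CA is not normally in the host trust store):

  curl -k https://192.168.64.47:8441/healthz
  # expected response body: ok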
I0602 10:17:29.648976 8182 cni.go:95] Creating CNI manager for ""
I0602 10:17:29.648979 8182 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0602 10:17:29.648986 8182 system_pods.go:43] waiting for kube-system pods to appear ...
I0602 10:17:29.654409 8182 system_pods.go:59] 7 kube-system pods found
I0602 10:17:29.654417 8182 system_pods.go:61] "coredns-64897985d-4lv74" [66c4d521-7d3f-4f49-b9b4-8310a79d576f] Running
I0602 10:17:29.654423 8182 system_pods.go:61] "etcd-functional-20220602101615-7689" [e0096da1-a437-4a60-96e0-9f5a4131b0f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0602 10:17:29.654426 8182 system_pods.go:61] "kube-apiserver-functional-20220602101615-7689" [6d298576-fa23-4014-811b-53705349c461] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0602 10:17:29.654432 8182 system_pods.go:61] "kube-controller-manager-functional-20220602101615-7689" [4dc96d50-ab16-49c8-bc29-5ec1ff634117] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0602 10:17:29.654435 8182 system_pods.go:61] "kube-proxy-pbh7c" [42a1cf5c-8a5b-4c27-af07-3715bdec9df6] Running
I0602 10:17:29.654437 8182 system_pods.go:61] "kube-scheduler-functional-20220602101615-7689" [068da6f3-b91f-4add-894f-eeb93df9105f] Running
I0602 10:17:29.654440 8182 system_pods.go:61] "storage-provisioner" [65bd2b64-e2ad-42e4-ac30-731c3241a5a9] Running
I0602 10:17:29.654442 8182 system_pods.go:74] duration metric: took 5.453088ms to wait for pod list to return data ...
I0602 10:17:29.654446 8182 node_conditions.go:102] verifying NodePressure condition ...
I0602 10:17:29.656581 8182 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0602 10:17:29.656593 8182 node_conditions.go:123] node cpu capacity is 2
I0602 10:17:29.656601 8182 node_conditions.go:105] duration metric: took 2.153233ms to run NodePressure ...
I0602 10:17:29.656611 8182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0602 10:17:29.821332 8182 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0602 10:17:29.824420 8182 kubeadm.go:777] kubelet initialised
I0602 10:17:29.824426 8182 kubeadm.go:778] duration metric: took 3.085765ms waiting for restarted kubelet to initialise ...
I0602 10:17:29.824433 8182 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0602 10:17:29.827720 8182 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-4lv74" in "kube-system" namespace to be "Ready" ...
I0602 10:17:29.830982 8182 pod_ready.go:92] pod "coredns-64897985d-4lv74" in "kube-system" namespace has status "Ready":"True"
I0602 10:17:29.830986 8182 pod_ready.go:81] duration metric: took 3.259212ms waiting for pod "coredns-64897985d-4lv74" in "kube-system" namespace to be "Ready" ...
I0602 10:17:29.831004 8182 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:32.938296 8182 pod_ready.go:97] error getting pod "etcd-functional-20220602101615-7689" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220602101615-7689": dial tcp 192.168.64.47:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.64.1:55823->192.168.64.47:8441: read: connection reset by peer
I0602 10:17:32.938314 8182 pod_ready.go:81] duration metric: took 3.107241198s waiting for pod "etcd-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
E0602 10:17:32.938323 8182 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-20220602101615-7689" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220602101615-7689": dial tcp 192.168.64.47:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.64.1:55823->192.168.64.47:8441: read: connection reset by peer
I0602 10:17:32.938340 8182 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:33.038505 8182 pod_ready.go:97] error getting pod "kube-apiserver-functional-20220602101615-7689" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20220602101615-7689": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:33.038517 8182 pod_ready.go:81] duration metric: took 100.166394ms waiting for pod "kube-apiserver-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
E0602 10:17:33.038527 8182 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-20220602101615-7689" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20220602101615-7689": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:33.038545 8182 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:33.139293 8182 pod_ready.go:97] error getting pod "kube-controller-manager-functional-20220602101615-7689" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20220602101615-7689": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:33.139310 8182 pod_ready.go:81] duration metric: took 100.756399ms waiting for pod "kube-controller-manager-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
E0602 10:17:33.139318 8182 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-20220602101615-7689" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20220602101615-7689": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:33.139330 8182 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pbh7c" in "kube-system" namespace to be "Ready" ...
I0602 10:17:33.239439 8182 pod_ready.go:97] error getting pod "kube-proxy-pbh7c" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/kube-proxy-pbh7c": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:33.239454 8182 pod_ready.go:81] duration metric: took 100.11226ms waiting for pod "kube-proxy-pbh7c" in "kube-system" namespace to be "Ready" ...
E0602 10:17:33.239463 8182 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-pbh7c" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/kube-proxy-pbh7c": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:33.239478 8182 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:33.341658 8182 pod_ready.go:97] error getting pod "kube-scheduler-functional-20220602101615-7689" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220602101615-7689": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:33.341670 8182 pod_ready.go:81] duration metric: took 102.182146ms waiting for pod "kube-scheduler-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
E0602 10:17:33.341680 8182 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-20220602101615-7689" in "kube-system" namespace (skipping!): Get "https://192.168.64.47:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220602101615-7689": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:33.341700 8182 pod_ready.go:38] duration metric: took 3.517190436s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0602 10:17:33.341723 8182 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
W0602 10:17:33.349761 8182 kubeadm.go:786] unable to adjust resource limits: oom_adj check cmd /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj". : /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": Process exited with status 1
stdout:
stderr:
cat: /proc//oom_adj: No such file or directory
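This warning looks like a harmless race: pgrep found no kube-apiserver process at that instant (the static pod was mid-restart), so the command substitution collapsed to the malformed path /proc//oom_adj. A guarded sketch of the same check:

  # only read oom_adj if pgrep actually returned a PID (-n = newest matching process)
  pid=$(pgrep -n kube-apiserver) && [ -n "$pid" ] && cat "/proc/${pid}/oom_adj"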
I0602 10:17:33.349769 8182 kubeadm.go:630] restartCluster took 10.107056884s
I0602 10:17:33.349772 8182 kubeadm.go:397] StartCluster complete in 10.132141443s
I0602 10:17:33.349781 8182 settings.go:142] acquiring lock: {Name:mkae2f61d6d3dee33f3dbb05f2858932ccc07616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0602 10:17:33.349858 8182 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
I0602 10:17:33.350228 8182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk37fa5eca82bd1ac80870d3a9f81a744c61ec49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
W0602 10:17:33.452153 8182 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.64.47:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.64.47:8441: connect: connection refused
I0602 10:17:35.760150 8182 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20220602101615-7689" rescaled to 1
I0602 10:17:35.760176 8182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0602 10:17:35.760176 8182 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.64.47 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
I0602 10:17:35.760234 8182 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
I0602 10:17:35.843500 8182 out.go:177] * Verifying Kubernetes components...
I0602 10:17:35.760367 8182 config.go:178] Loaded profile config "functional-20220602101615-7689": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0602 10:17:35.843572 8182 addons.go:65] Setting storage-provisioner=true in profile "functional-20220602101615-7689"
I0602 10:17:35.843593 8182 addons.go:65] Setting default-storageclass=true in profile "functional-20220602101615-7689"
I0602 10:17:35.849073 8182 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0602 10:17:35.863614 8182 addons.go:153] Setting addon storage-provisioner=true in "functional-20220602101615-7689"
W0602 10:17:35.863624 8182 addons.go:165] addon storage-provisioner should already be in state true
I0602 10:17:35.863635 8182 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20220602101615-7689"
I0602 10:17:35.863643 8182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0602 10:17:35.863676 8182 host.go:66] Checking if "functional-20220602101615-7689" exists ...
I0602 10:17:35.863939 8182 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0602 10:17:35.863958 8182 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0602 10:17:35.863994 8182 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0602 10:17:35.864012 8182 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0602 10:17:35.871015 8182 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:55831
I0602 10:17:35.871138 8182 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:55832
I0602 10:17:35.871449 8182 main.go:134] libmachine: () Calling .GetVersion
I0602 10:17:35.871618 8182 main.go:134] libmachine: () Calling .GetVersion
I0602 10:17:35.871786 8182 main.go:134] libmachine: Using API Version 1
I0602 10:17:35.871794 8182 main.go:134] libmachine: () Calling .SetConfigRaw
I0602 10:17:35.871919 8182 main.go:134] libmachine: Using API Version 1
I0602 10:17:35.871926 8182 main.go:134] libmachine: () Calling .SetConfigRaw
I0602 10:17:35.871988 8182 main.go:134] libmachine: () Calling .GetMachineName
I0602 10:17:35.872139 8182 main.go:134] libmachine: () Calling .GetMachineName
I0602 10:17:35.872220 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetState
I0602 10:17:35.872306 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0602 10:17:35.872342 8182 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0602 10:17:35.872386 8182 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0602 10:17:35.872387 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | hyperkit pid from json: 8066
I0602 10:17:35.878932 8182 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:55835
I0602 10:17:35.879275 8182 main.go:134] libmachine: () Calling .GetVersion
I0602 10:17:35.879617 8182 main.go:134] libmachine: Using API Version 1
I0602 10:17:35.879623 8182 main.go:134] libmachine: () Calling .SetConfigRaw
I0602 10:17:35.879838 8182 main.go:134] libmachine: () Calling .GetMachineName
I0602 10:17:35.879934 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetState
I0602 10:17:35.880009 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0602 10:17:35.880111 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | hyperkit pid from json: 8066
I0602 10:17:35.880906 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:35.881923 8182 addons.go:153] Setting addon default-storageclass=true in "functional-20220602101615-7689"
I0602 10:17:35.901532 8182 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
W0602 10:17:35.901532 8182 addons.go:165] addon default-storageclass should already be in state true
I0602 10:17:35.884963 8182 node_ready.go:35] waiting up to 6m0s for node "functional-20220602101615-7689" to be "Ready" ...
I0602 10:17:35.901572 8182 host.go:66] Checking if "functional-20220602101615-7689" exists ...
I0602 10:17:35.922827 8182 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0602 10:17:35.923013 8182 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0602 10:17:35.959788 8182 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0602 10:17:35.959785 8182 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0602 10:17:35.959810 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:35.960574 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:35.960854 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:35.961002 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:35.961459 8182 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602101615-7689/id_rsa Username:docker}
I0602 10:17:35.962900 8182 node_ready.go:49] node "functional-20220602101615-7689" has status "Ready":"True"
I0602 10:17:35.962905 8182 node_ready.go:38] duration metric: took 40.187463ms waiting for node "functional-20220602101615-7689" to be "Ready" ...
I0602 10:17:35.962910 8182 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0602 10:17:35.966783 8182 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:55838
I0602 10:17:35.967097 8182 main.go:134] libmachine: () Calling .GetVersion
I0602 10:17:35.967426 8182 main.go:134] libmachine: Using API Version 1
I0602 10:17:35.967434 8182 main.go:134] libmachine: () Calling .SetConfigRaw
I0602 10:17:35.967486 8182 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-4lv74" in "kube-system" namespace to be "Ready" ...
I0602 10:17:35.967636 8182 main.go:134] libmachine: () Calling .GetMachineName
I0602 10:17:35.967976 8182 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0602 10:17:35.967994 8182 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0602 10:17:35.974490 8182 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:55840
I0602 10:17:35.974854 8182 main.go:134] libmachine: () Calling .GetVersion
I0602 10:17:35.974947 8182 pod_ready.go:92] pod "coredns-64897985d-4lv74" in "kube-system" namespace has status "Ready":"True"
I0602 10:17:35.974950 8182 pod_ready.go:81] duration metric: took 7.459356ms waiting for pod "coredns-64897985d-4lv74" in "kube-system" namespace to be "Ready" ...
I0602 10:17:35.974955 8182 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:35.975174 8182 main.go:134] libmachine: Using API Version 1
I0602 10:17:35.975185 8182 main.go:134] libmachine: () Calling .SetConfigRaw
I0602 10:17:35.975377 8182 main.go:134] libmachine: () Calling .GetMachineName
I0602 10:17:35.975463 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetState
I0602 10:17:35.975542 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0602 10:17:35.975630 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | hyperkit pid from json: 8066
I0602 10:17:35.976433 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .DriverName
I0602 10:17:35.976625 8182 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0602 10:17:35.976630 8182 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0602 10:17:35.976657 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHHostname
I0602 10:17:35.976741 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHPort
I0602 10:17:35.976815 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHKeyPath
I0602 10:17:35.977348 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .GetSSHUsername
I0602 10:17:35.977560 8182 sshutil.go:53] new ssh client: &{IP:192.168.64.47 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14269-6552-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602101615-7689/id_rsa Username:docker}
I0602 10:17:36.011025 8182 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0602 10:17:36.022287 8182 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0602 10:17:36.562782 8182 main.go:134] libmachine: Making call to close driver server
I0602 10:17:36.562791 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .Close
I0602 10:17:36.562955 8182 main.go:134] libmachine: Successfully made call to close driver server
I0602 10:17:36.562960 8182 main.go:134] libmachine: Making call to close connection to plugin binary
I0602 10:17:36.562964 8182 main.go:134] libmachine: Making call to close driver server
I0602 10:17:36.562968 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .Close
I0602 10:17:36.562977 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | Closing plugin on server side
I0602 10:17:36.563119 8182 main.go:134] libmachine: Successfully made call to close driver server
I0602 10:17:36.563128 8182 main.go:134] libmachine: Making call to close connection to plugin binary
I0602 10:17:36.563142 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | Closing plugin on server side
I0602 10:17:36.573071 8182 main.go:134] libmachine: Making call to close driver server
I0602 10:17:36.573079 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .Close
I0602 10:17:36.573235 8182 main.go:134] libmachine: Successfully made call to close driver server
I0602 10:17:36.573236 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | Closing plugin on server side
I0602 10:17:36.573242 8182 main.go:134] libmachine: Making call to close connection to plugin binary
I0602 10:17:36.573248 8182 main.go:134] libmachine: Making call to close driver server
I0602 10:17:36.573252 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .Close
I0602 10:17:36.573353 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | Closing plugin on server side
I0602 10:17:36.573372 8182 main.go:134] libmachine: Successfully made call to close driver server
I0602 10:17:36.573377 8182 main.go:134] libmachine: Making call to close connection to plugin binary
I0602 10:17:36.573385 8182 main.go:134] libmachine: Making call to close driver server
I0602 10:17:36.573390 8182 main.go:134] libmachine: (functional-20220602101615-7689) Calling .Close
I0602 10:17:36.573516 8182 main.go:134] libmachine: Successfully made call to close driver server
I0602 10:17:36.573519 8182 main.go:134] libmachine: (functional-20220602101615-7689) DBG | Closing plugin on server side
I0602 10:17:36.573533 8182 main.go:134] libmachine: Making call to close connection to plugin binary
I0602 10:17:36.595420 8182 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0602 10:17:36.637924 8182 addons.go:417] enableAddons completed in 877.660674ms
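Both addons are applied with the guest's own kubectl against the in-VM kubeconfig, as the two apply commands above show. Reproducing one of them by hand would look roughly like this (paths taken from the log; the exact wrapper minikube uses is not shown here):

  minikube -p functional-20220602101615-7689 ssh -- \
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml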
I0602 10:17:37.990258 8182 pod_ready.go:102] pod "etcd-functional-20220602101615-7689" in "kube-system" namespace has status "Ready":"False"
I0602 10:17:39.993209 8182 pod_ready.go:102] pod "etcd-functional-20220602101615-7689" in "kube-system" namespace has status "Ready":"False"
I0602 10:17:40.989213 8182 pod_ready.go:92] pod "etcd-functional-20220602101615-7689" in "kube-system" namespace has status "Ready":"True"
I0602 10:17:40.989221 8182 pod_ready.go:81] duration metric: took 5.014160281s waiting for pod "etcd-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:40.989226 8182 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:41.501947 8182 pod_ready.go:92] pod "kube-controller-manager-functional-20220602101615-7689" in "kube-system" namespace has status "Ready":"True"
I0602 10:17:41.501955 8182 pod_ready.go:81] duration metric: took 512.715612ms waiting for pod "kube-controller-manager-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:41.501963 8182 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pbh7c" in "kube-system" namespace to be "Ready" ...
I0602 10:17:41.504750 8182 pod_ready.go:92] pod "kube-proxy-pbh7c" in "kube-system" namespace has status "Ready":"True"
I0602 10:17:41.504753 8182 pod_ready.go:81] duration metric: took 2.787048ms waiting for pod "kube-proxy-pbh7c" in "kube-system" namespace to be "Ready" ...
I0602 10:17:41.504757 8182 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:43.522953 8182 pod_ready.go:92] pod "kube-scheduler-functional-20220602101615-7689" in "kube-system" namespace has status "Ready":"True"
I0602 10:17:43.522961 8182 pod_ready.go:81] duration metric: took 2.018159788s waiting for pod "kube-scheduler-functional-20220602101615-7689" in "kube-system" namespace to be "Ready" ...
I0602 10:17:43.522966 8182 pod_ready.go:38] duration metric: took 7.559896337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0602 10:17:43.522981 8182 api_server.go:51] waiting for apiserver process to appear ...
I0602 10:17:43.523027 8182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0602 10:17:43.531901 8182 api_server.go:71] duration metric: took 7.77155479s to wait for apiserver process to appear ...
I0602 10:17:43.531911 8182 api_server.go:87] waiting for apiserver healthz status ...
I0602 10:17:43.531916 8182 api_server.go:240] Checking apiserver healthz at https://192.168.64.47:8441/healthz ...
I0602 10:17:43.535626 8182 api_server.go:266] https://192.168.64.47:8441/healthz returned 200:
ok
I0602 10:17:43.536173 8182 api_server.go:140] control plane version: v1.23.6
I0602 10:17:43.536177 8182 api_server.go:130] duration metric: took 4.263572ms to wait for apiserver health ...
I0602 10:17:43.536182 8182 system_pods.go:43] waiting for kube-system pods to appear ...
I0602 10:17:43.539278 8182 system_pods.go:59] 7 kube-system pods found
I0602 10:17:43.539283 8182 system_pods.go:61] "coredns-64897985d-4lv74" [66c4d521-7d3f-4f49-b9b4-8310a79d576f] Running
I0602 10:17:43.539288 8182 system_pods.go:61] "etcd-functional-20220602101615-7689" [e0096da1-a437-4a60-96e0-9f5a4131b0f8] Running
I0602 10:17:43.539292 8182 system_pods.go:61] "kube-apiserver-functional-20220602101615-7689" [2629eef3-b21f-475b-bbc5-0459910ee01d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0602 10:17:43.539296 8182 system_pods.go:61] "kube-controller-manager-functional-20220602101615-7689" [4dc96d50-ab16-49c8-bc29-5ec1ff634117] Running
I0602 10:17:43.539299 8182 system_pods.go:61] "kube-proxy-pbh7c" [42a1cf5c-8a5b-4c27-af07-3715bdec9df6] Running
I0602 10:17:43.539301 8182 system_pods.go:61] "kube-scheduler-functional-20220602101615-7689" [068da6f3-b91f-4add-894f-eeb93df9105f] Running
I0602 10:17:43.539303 8182 system_pods.go:61] "storage-provisioner" [65bd2b64-e2ad-42e4-ac30-731c3241a5a9] Running
I0602 10:17:43.539305 8182 system_pods.go:74] duration metric: took 3.120982ms to wait for pod list to return data ...
I0602 10:17:43.539308 8182 default_sa.go:34] waiting for default service account to be created ...
I0602 10:17:43.540597 8182 default_sa.go:45] found service account: "default"
I0602 10:17:43.540600 8182 default_sa.go:55] duration metric: took 1.28996ms for default service account to be created ...
I0602 10:17:43.540602 8182 system_pods.go:116] waiting for k8s-apps to be running ...
I0602 10:17:43.543683 8182 system_pods.go:86] 7 kube-system pods found
I0602 10:17:43.543689 8182 system_pods.go:89] "coredns-64897985d-4lv74" [66c4d521-7d3f-4f49-b9b4-8310a79d576f] Running
I0602 10:17:43.543695 8182 system_pods.go:89] "etcd-functional-20220602101615-7689" [e0096da1-a437-4a60-96e0-9f5a4131b0f8] Running
I0602 10:17:43.543701 8182 system_pods.go:89] "kube-apiserver-functional-20220602101615-7689" [2629eef3-b21f-475b-bbc5-0459910ee01d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0602 10:17:43.543705 8182 system_pods.go:89] "kube-controller-manager-functional-20220602101615-7689" [4dc96d50-ab16-49c8-bc29-5ec1ff634117] Running
I0602 10:17:43.543708 8182 system_pods.go:89] "kube-proxy-pbh7c" [42a1cf5c-8a5b-4c27-af07-3715bdec9df6] Running
I0602 10:17:43.543710 8182 system_pods.go:89] "kube-scheduler-functional-20220602101615-7689" [068da6f3-b91f-4add-894f-eeb93df9105f] Running
I0602 10:17:43.543713 8182 system_pods.go:89] "storage-provisioner" [65bd2b64-e2ad-42e4-ac30-731c3241a5a9] Running
I0602 10:17:43.543717 8182 system_pods.go:126] duration metric: took 3.11047ms to wait for k8s-apps to be running ...
I0602 10:17:43.543723 8182 system_svc.go:44] waiting for kubelet service to be running ....
I0602 10:17:43.543772 8182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0602 10:17:43.553321 8182 system_svc.go:56] duration metric: took 9.595093ms WaitForService to wait for kubelet.
I0602 10:17:43.553332 8182 kubeadm.go:572] duration metric: took 7.792982948s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0602 10:17:43.553344 8182 node_conditions.go:102] verifying NodePressure condition ...
I0602 10:17:43.555217 8182 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0602 10:17:43.555223 8182 node_conditions.go:123] node cpu capacity is 2
I0602 10:17:43.555228 8182 node_conditions.go:105] duration metric: took 1.882016ms to run NodePressure ...
I0602 10:17:43.555233 8182 start.go:213] waiting for startup goroutines ...
I0602 10:17:43.584848 8182 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
I0602 10:17:43.623430 8182 out.go:177] * Done! kubectl is now configured to use "functional-20220602101615-7689" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Thu 2022-06-02 17:16:24 UTC, ends at Thu 2022-06-02 17:17:44 UTC. --
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.300822543Z" level=info msg="cleaning up dead shim"
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2261]: time="2022-06-02T17:17:31.301034539Z" level=info msg="ignoring event" container=df2bcb9a94f95f28b7a04cc66977bf150628df5587e7a76183c6a805ac282b91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.308102214Z" level=warning msg="cleanup warnings time=\"2022-06-02T17:17:31Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6690 runtime=io.containerd.runc.v2\n"
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2261]: time="2022-06-02T17:17:31.939292347Z" level=info msg="ignoring event" container=6371bd31eae117e6937daeeadb4cdcae391692a52e6814601da80a70f128d4fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.940217502Z" level=info msg="shim disconnected" id=6371bd31eae117e6937daeeadb4cdcae391692a52e6814601da80a70f128d4fc
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.940262336Z" level=warning msg="cleaning up after shim disconnected" id=6371bd31eae117e6937daeeadb4cdcae391692a52e6814601da80a70f128d4fc namespace=moby
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.940271010Z" level=info msg="cleaning up dead shim"
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.950655453Z" level=info msg="shim disconnected" id=dbbc05eea9ad4adcb405c1e9b2697b4e8e0c88b61d0d8a39da59c3b62e90a3cf
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.950713562Z" level=warning msg="cleaning up after shim disconnected" id=dbbc05eea9ad4adcb405c1e9b2697b4e8e0c88b61d0d8a39da59c3b62e90a3cf namespace=moby
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.950722254Z" level=info msg="cleaning up dead shim"
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2261]: time="2022-06-02T17:17:31.950856036Z" level=info msg="ignoring event" container=dbbc05eea9ad4adcb405c1e9b2697b4e8e0c88b61d0d8a39da59c3b62e90a3cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.957563355Z" level=warning msg="cleanup warnings time=\"2022-06-02T17:17:31Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6718 runtime=io.containerd.runc.v2\n"
Jun 02 17:17:31 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:31.965082524Z" level=warning msg="cleanup warnings time=\"2022-06-02T17:17:31Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6733 runtime=io.containerd.runc.v2\n"
Jun 02 17:17:33 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:33.650575726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 02 17:17:33 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:33.650641592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 02 17:17:33 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:33.650651285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 02 17:17:33 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:33.651334552Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ecdb32a6035dbaa348167f10cc767622152b3166294dae28e899b7abf321b2c8 pid=6761 runtime=io.containerd.runc.v2
Jun 02 17:17:37 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:37.873561958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 02 17:17:37 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:37.873685100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 02 17:17:37 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:37.873693845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 02 17:17:37 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:37.874334914Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/25e985f7fdfadbad6918eddfaf011b2021365b17262ec51d33968896f426e3ca pid=6862 runtime=io.containerd.runc.v2
Jun 02 17:17:38 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:38.280161581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 02 17:17:38 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:38.280205641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 02 17:17:38 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:38.280215726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 02 17:17:38 functional-20220602101615-7689 dockerd[2267]: time="2022-06-02T17:17:38.280581850Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4b88af8ca2ba50cb9114f5cff2ffc12d2162864725288227f2db82ca615f6fd1 pid=6924 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4b88af8ca2ba5 a4ca41631cc7a 6 seconds ago Running coredns 1 25e985f7fdfad
ecdb32a6035db 8fa62c12256df 11 seconds ago Running kube-apiserver 1 547db3c5a3efd
df2bcb9a94f95 8fa62c12256df 14 seconds ago Exited kube-apiserver 0 547db3c5a3efd
3c3223c77dbe7 6e38f40d628db 18 seconds ago Running storage-provisioner 1 54e5c0aa6fb94
a28ffab3d2b65 595f327f224a4 19 seconds ago Running kube-scheduler 1 008e54c7b5caa
de68258b19ed3 4c03754524064 20 seconds ago Running kube-proxy 1 2e4350510fd37
de577d0a66880 25f8c7f3da61c 20 seconds ago Running etcd 1 a10bee8c97b95
121776f0a737e df7b72818ad2e 20 seconds ago Running kube-controller-manager 1 8718fcd7d89d5
d6dc4f8ab1fbb 6e38f40d628db 41 seconds ago Exited storage-provisioner 0 c87186e6c0f67
d12d254c553e5 a4ca41631cc7a 42 seconds ago Exited coredns 0 ba819c9c48d5b
64dff3579d0ee 4c03754524064 42 seconds ago Exited kube-proxy 0 28e50ad5f2674
c877ee90061f2 25f8c7f3da61c About a minute ago Exited etcd 0 27824dc20d356
2e20450262aa7 595f327f224a4 About a minute ago Exited kube-scheduler 0 343b86a926cee
1582f1012bc6d df7b72818ad2e About a minute ago Exited kube-controller-manager 0 f1500ac0c5216
*
* ==> coredns [4b88af8ca2ba] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> coredns [d12d254c553e] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: functional-20220602101615-7689
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20220602101615-7689
kubernetes.io/os=linux
minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
minikube.k8s.io/name=functional-20220602101615-7689
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_06_02T10_16_48_0700
minikube.k8s.io/version=v1.26.0-beta.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 02 Jun 2022 17:16:45 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20220602101615-7689
AcquireTime: <unset>
RenewTime: Thu, 02 Jun 2022 17:17:40 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 02 Jun 2022 17:17:30 +0000 Thu, 02 Jun 2022 17:16:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Jun 2022 17:17:30 +0000 Thu, 02 Jun 2022 17:16:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Jun 2022 17:17:30 +0000 Thu, 02 Jun 2022 17:16:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Jun 2022 17:17:30 +0000 Thu, 02 Jun 2022 17:17:30 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.47
Hostname: functional-20220602101615-7689
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3935108Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3935108Ki
pods: 110
System Info:
Machine ID: 04f1fedce0184ac9b36f1944bf7c0efe
System UUID: b61311ec-0000-0000-ba6a-f01898ef957c
Boot ID: a8d42b3d-2502-44c5-bece-4587aed17a0f
Kernel Version: 4.19.235
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.16
Kubelet Version: v1.23.6
Kube-Proxy Version: v1.23.6
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-64897985d-4lv74 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 44s
kube-system etcd-functional-20220602101615-7689 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 59s
kube-system kube-apiserver-functional-20220602101615-7689 250m (12%) 0 (0%) 0 (0%) 0 (0%) 9s
kube-system kube-controller-manager-functional-20220602101615-7689 200m (10%) 0 (0%) 0 (0%) 0 (0%) 57s
kube-system kube-proxy-pbh7c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44s
kube-system kube-scheduler-functional-20220602101615-7689 100m (5%) 0 (0%) 0 (0%) 0 (0%) 57s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (4%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 42s kube-proxy
Normal Starting 16s kube-proxy
Normal NodeHasNoDiskPressure 65s (x5 over 65s) kubelet Node functional-20220602101615-7689 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 65s (x5 over 65s) kubelet Node functional-20220602101615-7689 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 65s (x5 over 65s) kubelet Node functional-20220602101615-7689 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 57s kubelet Node functional-20220602101615-7689 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 57s kubelet Node functional-20220602101615-7689 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 57s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 57s kubelet Node functional-20220602101615-7689 status is now: NodeHasSufficientMemory
Normal Starting 57s kubelet Starting kubelet.
Normal NodeReady 47s kubelet Node functional-20220602101615-7689 status is now: NodeReady
Normal Starting 16s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 16s kubelet Node functional-20220602101615-7689 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 16s kubelet Node functional-20220602101615-7689 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 16s kubelet Node functional-20220602101615-7689 status is now: NodeHasSufficientPID
Normal NodeNotReady 16s kubelet Node functional-20220602101615-7689 status is now: NodeNotReady
Normal NodeAllocatableEnforced 15s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 15s kubelet Node functional-20220602101615-7689 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.007653] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.458419] systemd-fstab-generator[1117]: Ignoring "noauto" for root device
[ +0.042950] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +0.533749] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1643 comm=systemd-network
[ +0.378615] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +2.286485] systemd-fstab-generator[2023]: Ignoring "noauto" for root device
[ +0.081225] systemd-fstab-generator[2034]: Ignoring "noauto" for root device
[ +6.521338] systemd-fstab-generator[2251]: Ignoring "noauto" for root device
[ +1.518685] kauditd_printk_skb: 68 callbacks suppressed
[ +0.226384] systemd-fstab-generator[2413]: Ignoring "noauto" for root device
[ +0.073006] systemd-fstab-generator[2424]: Ignoring "noauto" for root device
[ +0.081347] systemd-fstab-generator[2435]: Ignoring "noauto" for root device
[ +3.557244] systemd-fstab-generator[2678]: Ignoring "noauto" for root device
[ +8.212246] systemd-fstab-generator[3385]: Ignoring "noauto" for root device
[Jun 2 17:17] kauditd_printk_skb: 155 callbacks suppressed
[ +6.027746] systemd-fstab-generator[4460]: Ignoring "noauto" for root device
[ +0.150847] systemd-fstab-generator[4471]: Ignoring "noauto" for root device
[ +0.133945] systemd-fstab-generator[4482]: Ignoring "noauto" for root device
[ +0.949181] kauditd_printk_skb: 68 callbacks suppressed
[ +13.709164] systemd-fstab-generator[5197]: Ignoring "noauto" for root device
[ +0.120736] systemd-fstab-generator[5208]: Ignoring "noauto" for root device
[ +0.128948] systemd-fstab-generator[5219]: Ignoring "noauto" for root device
[ +1.047900] kauditd_printk_skb: 5 callbacks suppressed
[ +5.737052] systemd-fstab-generator[6384]: Ignoring "noauto" for root device
*
* ==> etcd [c877ee90061f] <==
* {"level":"info","ts":"2022-06-02T17:16:43.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 became pre-candidate at term 1"}
{"level":"info","ts":"2022-06-02T17:16:43.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 received MsgPreVoteResp from 8a022d7680d5edb4 at term 1"}
{"level":"info","ts":"2022-06-02T17:16:43.391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 became candidate at term 2"}
{"level":"info","ts":"2022-06-02T17:16:43.391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 received MsgVoteResp from 8a022d7680d5edb4 at term 2"}
{"level":"info","ts":"2022-06-02T17:16:43.391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 became leader at term 2"}
{"level":"info","ts":"2022-06-02T17:16:43.391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8a022d7680d5edb4 elected leader 8a022d7680d5edb4 at term 2"}
{"level":"info","ts":"2022-06-02T17:16:43.391Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8a022d7680d5edb4","local-member-attributes":"{Name:functional-20220602101615-7689 ClientURLs:[https://192.168.64.47:2379]}","request-path":"/0/members/8a022d7680d5edb4/attributes","cluster-id":"972cfb35c9b748c0","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-02T17:16:43.391Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-02T17:16:43.392Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-02T17:16:43.392Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-02T17:16:43.393Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-02T17:16:43.393Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"972cfb35c9b748c0","local-member-id":"8a022d7680d5edb4","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-02T17:16:43.393Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-02T17:16:43.393Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-02T17:16:43.398Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.47:2379"}
{"level":"info","ts":"2022-06-02T17:16:43.399Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-02T17:16:43.399Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-06-02T17:17:23.417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-06-02T17:17:23.417Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220602101615-7689","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.47:2380"],"advertise-client-urls":["https://192.168.64.47:2379"]}
WARNING: 2022/06/02 17:17:23 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/06/02 17:17:23 [core] grpc: addrConn.createTransport failed to connect to {192.168.64.47:2379 192.168.64.47:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.64.47:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-06-02T17:17:23.428Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8a022d7680d5edb4","current-leader-member-id":"8a022d7680d5edb4"}
{"level":"info","ts":"2022-06-02T17:17:23.429Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.64.47:2380"}
{"level":"info","ts":"2022-06-02T17:17:23.430Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.64.47:2380"}
{"level":"info","ts":"2022-06-02T17:17:23.430Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220602101615-7689","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.47:2380"],"advertise-client-urls":["https://192.168.64.47:2379"]}
*
* ==> etcd [de577d0a6688] <==
* {"level":"info","ts":"2022-06-02T17:17:25.482Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"8a022d7680d5edb4","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-06-02T17:17:25.482Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-06-02T17:17:25.483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 switched to configuration voters=(9944560914178370996)"}
{"level":"info","ts":"2022-06-02T17:17:25.483Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"972cfb35c9b748c0","local-member-id":"8a022d7680d5edb4","added-peer-id":"8a022d7680d5edb4","added-peer-peer-urls":["https://192.168.64.47:2380"]}
{"level":"info","ts":"2022-06-02T17:17:25.483Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"972cfb35c9b748c0","local-member-id":"8a022d7680d5edb4","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-02T17:17:25.483Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-06-02T17:17:25.486Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-06-02T17:17:25.486Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8a022d7680d5edb4","initial-advertise-peer-urls":["https://192.168.64.47:2380"],"listen-peer-urls":["https://192.168.64.47:2380"],"advertise-client-urls":["https://192.168.64.47:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.47:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-06-02T17:17:25.486Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-06-02T17:17:25.486Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.64.47:2380"}
{"level":"info","ts":"2022-06-02T17:17:25.486Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.64.47:2380"}
{"level":"info","ts":"2022-06-02T17:17:26.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 is starting a new election at term 2"}
{"level":"info","ts":"2022-06-02T17:17:26.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 became pre-candidate at term 2"}
{"level":"info","ts":"2022-06-02T17:17:26.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 received MsgPreVoteResp from 8a022d7680d5edb4 at term 2"}
{"level":"info","ts":"2022-06-02T17:17:26.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 became candidate at term 3"}
{"level":"info","ts":"2022-06-02T17:17:26.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 received MsgVoteResp from 8a022d7680d5edb4 at term 3"}
{"level":"info","ts":"2022-06-02T17:17:26.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8a022d7680d5edb4 became leader at term 3"}
{"level":"info","ts":"2022-06-02T17:17:26.771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8a022d7680d5edb4 elected leader 8a022d7680d5edb4 at term 3"}
{"level":"info","ts":"2022-06-02T17:17:26.772Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8a022d7680d5edb4","local-member-attributes":"{Name:functional-20220602101615-7689 ClientURLs:[https://192.168.64.47:2379]}","request-path":"/0/members/8a022d7680d5edb4/attributes","cluster-id":"972cfb35c9b748c0","publish-timeout":"7s"}
{"level":"info","ts":"2022-06-02T17:17:26.772Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-02T17:17:26.773Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-06-02T17:17:26.773Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-06-02T17:17:26.774Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.47:2379"}
{"level":"info","ts":"2022-06-02T17:17:26.774Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-06-02T17:17:26.774Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 17:17:45 up 1 min, 0 users, load average: 1.08, 0.39, 0.14
Linux functional-20220602101615-7689 4.19.235 #1 SMP Fri May 27 20:55:39 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [df2bcb9a94f9] <==
* I0602 17:17:31.279244 1 server.go:565] external host was not specified, using 192.168.64.47
I0602 17:17:31.279832 1 server.go:172] Version: v1.23.6
E0602 17:17:31.280203 1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
*
* ==> kube-apiserver [ecdb32a6035d] <==
* I0602 17:17:35.753867 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0602 17:17:35.755673 1 controller.go:85] Starting OpenAPI controller
I0602 17:17:35.757932 1 naming_controller.go:291] Starting NamingConditionController
I0602 17:17:35.757941 1 establishing_controller.go:76] Starting EstablishingController
I0602 17:17:35.757947 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0602 17:17:35.757951 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0602 17:17:35.757955 1 crd_finalizer.go:266] Starting CRDFinalizer
I0602 17:17:35.758099 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0602 17:17:35.821173 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0602 17:17:35.767855 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0602 17:17:35.767863 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0602 17:17:35.871241 1 cache.go:39] Caches are synced for autoregister controller
I0602 17:17:35.871371 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0602 17:17:35.871543 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0602 17:17:35.875402 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0602 17:17:35.875417 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0602 17:17:35.903632 1 shared_informer.go:247] Caches are synced for node_authorizer
I0602 17:17:35.922015 1 shared_informer.go:247] Caches are synced for crd-autoregister
E0602 17:17:35.932720 1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0602 17:17:36.727402 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0602 17:17:36.732886 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0602 17:17:36.756395 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0602 17:17:40.101093 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0602 17:17:41.065965 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0602 17:17:41.127050 1 controller.go:611] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [121776f0a737] <==
* I0602 17:17:40.955348 1 shared_informer.go:247] Caches are synced for ephemeral
I0602 17:17:40.956263 1 shared_informer.go:247] Caches are synced for TTL after finished
I0602 17:17:40.961113 1 shared_informer.go:247] Caches are synced for deployment
I0602 17:17:40.970217 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0602 17:17:40.971396 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0602 17:17:40.971441 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0602 17:17:40.971546 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0602 17:17:40.973420 1 shared_informer.go:247] Caches are synced for TTL
I0602 17:17:40.975941 1 shared_informer.go:247] Caches are synced for PVC protection
I0602 17:17:40.984748 1 shared_informer.go:247] Caches are synced for cronjob
I0602 17:17:40.984858 1 shared_informer.go:247] Caches are synced for attach detach
I0602 17:17:40.986013 1 shared_informer.go:247] Caches are synced for crt configmap
I0602 17:17:40.988427 1 shared_informer.go:247] Caches are synced for daemon sets
I0602 17:17:40.991226 1 shared_informer.go:247] Caches are synced for persistent volume
I0602 17:17:41.057437 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0602 17:17:41.061741 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0602 17:17:41.117212 1 shared_informer.go:247] Caches are synced for endpoint
I0602 17:17:41.128531 1 shared_informer.go:247] Caches are synced for ReplicationController
I0602 17:17:41.168947 1 shared_informer.go:247] Caches are synced for disruption
I0602 17:17:41.169068 1 disruption.go:371] Sending events to api server.
I0602 17:17:41.176951 1 shared_informer.go:247] Caches are synced for resource quota
I0602 17:17:41.208236 1 shared_informer.go:247] Caches are synced for resource quota
I0602 17:17:41.605797 1 shared_informer.go:247] Caches are synced for garbage collector
I0602 17:17:41.659906 1 shared_informer.go:247] Caches are synced for garbage collector
I0602 17:17:41.659964 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [1582f1012bc6] <==
* I0602 17:17:00.076040 1 shared_informer.go:247] Caches are synced for ReplicationController
I0602 17:17:00.077271 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0602 17:17:00.077359 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0602 17:17:00.077639 1 shared_informer.go:247] Caches are synced for PV protection
I0602 17:17:00.077667 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0602 17:17:00.083760 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0602 17:17:00.091627 1 shared_informer.go:247] Caches are synced for PVC protection
I0602 17:17:00.103990 1 shared_informer.go:247] Caches are synced for ephemeral
I0602 17:17:00.106238 1 shared_informer.go:247] Caches are synced for endpoint
I0602 17:17:00.128762 1 shared_informer.go:247] Caches are synced for persistent volume
I0602 17:17:00.168864 1 shared_informer.go:247] Caches are synced for expand
I0602 17:17:00.171698 1 shared_informer.go:247] Caches are synced for attach detach
I0602 17:17:00.253739 1 shared_informer.go:247] Caches are synced for daemon sets
I0602 17:17:00.276725 1 shared_informer.go:247] Caches are synced for stateful set
I0602 17:17:00.282612 1 shared_informer.go:247] Caches are synced for resource quota
I0602 17:17:00.328327 1 shared_informer.go:247] Caches are synced for resource quota
I0602 17:17:00.632522 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
I0602 17:17:00.645533 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0602 17:17:00.715332 1 shared_informer.go:247] Caches are synced for garbage collector
I0602 17:17:00.726601 1 shared_informer.go:247] Caches are synced for garbage collector
I0602 17:17:00.726631 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0602 17:17:01.034299 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pbh7c"
I0602 17:17:01.088036 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-sddlg"
I0602 17:17:01.092819 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-4lv74"
I0602 17:17:01.130234 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-sddlg"
*
* ==> kube-proxy [64dff3579d0e] <==
* I0602 17:17:02.665733 1 node.go:163] Successfully retrieved node IP: 192.168.64.47
I0602 17:17:02.666028 1 server_others.go:138] "Detected node IP" address="192.168.64.47"
I0602 17:17:02.666604 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0602 17:17:02.703402 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0602 17:17:02.703524 1 server_others.go:206] "Using iptables Proxier"
I0602 17:17:02.703733 1 server.go:656] "Version info" version="v1.23.6"
I0602 17:17:02.704426 1 config.go:317] "Starting service config controller"
I0602 17:17:02.704437 1 shared_informer.go:240] Waiting for caches to sync for service config
I0602 17:17:02.704472 1 config.go:226] "Starting endpoint slice config controller"
I0602 17:17:02.704477 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0602 17:17:02.804707 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0602 17:17:02.804837 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-proxy [de68258b19ed] <==
* I0602 17:17:28.500226 1 node.go:163] Successfully retrieved node IP: 192.168.64.47
I0602 17:17:28.500285 1 server_others.go:138] "Detected node IP" address="192.168.64.47"
I0602 17:17:28.500302 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0602 17:17:28.615491 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0602 17:17:28.615586 1 server_others.go:206] "Using iptables Proxier"
I0602 17:17:28.615753 1 server.go:656] "Version info" version="v1.23.6"
I0602 17:17:28.617702 1 config.go:226] "Starting endpoint slice config controller"
I0602 17:17:28.617713 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0602 17:17:28.617765 1 config.go:317] "Starting service config controller"
I0602 17:17:28.617769 1 shared_informer.go:240] Waiting for caches to sync for service config
I0602 17:17:28.718640 1 shared_informer.go:247] Caches are synced for service config
I0602 17:17:28.718672 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [2e20450262aa] <==
* W0602 17:16:45.129750 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0602 17:16:45.129783 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0602 17:16:45.129950 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0602 17:16:45.130012 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0602 17:16:45.130156 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0602 17:16:45.130165 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0602 17:16:45.130319 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0602 17:16:45.130349 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0602 17:16:45.130618 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0602 17:16:45.130649 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0602 17:16:45.988352 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0602 17:16:45.988397 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0602 17:16:45.992240 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0602 17:16:45.992420 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0602 17:16:46.131678 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0602 17:16:46.131846 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0602 17:16:46.245863 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0602 17:16:46.245954 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0602 17:16:46.250827 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0602 17:16:46.250983 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0602 17:16:46.279721 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0602 17:16:49.214066 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0602 17:17:23.461036 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0602 17:17:23.461537 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
I0602 17:17:23.461553 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
*
* ==> kube-scheduler [a28ffab3d2b6] <==
* W0602 17:17:28.459389 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0602 17:17:28.459417 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0602 17:17:28.459517 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0602 17:17:28.459546 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0602 17:17:28.459586 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0602 17:17:28.459613 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0602 17:17:28.459695 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0602 17:17:28.459723 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0602 17:17:28.459771 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0602 17:17:28.459800 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0602 17:17:28.459828 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0602 17:17:28.459853 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0602 17:17:29.528836 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0602 17:17:35.822553 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
E0602 17:17:35.822772 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
E0602 17:17:35.822886 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
E0602 17:17:35.822963 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
E0602 17:17:35.822993 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
E0602 17:17:35.823010 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
E0602 17:17:35.823030 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
E0602 17:17:35.823051 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
E0602 17:17:35.823068 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0602 17:17:35.823088 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
E0602 17:17:35.828287 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
E0602 17:17:35.831153 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
*
* ==> kubelet <==
* -- Journal begins at Thu 2022-06-02 17:16:24 UTC, ends at Thu 2022-06-02 17:17:46 UTC. --
Jun 02 17:17:33 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:33.893181 6390 kubelet.go:1808] failed to "KillContainer" for "kube-apiserver" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: dbbc05eea9ad4adcb405c1e9b2697b4e8e0c88b61d0d8a39da59c3b62e90a3cf"
Jun 02 17:17:33 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:33.893251 6390 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-apiserver\" with KillContainerError: \"rpc error: code = Unknown desc = Error response from daemon: No such container: dbbc05eea9ad4adcb405c1e9b2697b4e8e0c88b61d0d8a39da59c3b62e90a3cf\"" pod="kube-system/kube-apiserver-functional-20220602101615-7689" podUID=d138a3ca8854751d95e9b40a43d5cc40
Jun 02 17:17:34 functional-20220602101615-7689 kubelet[6390]: I0602 17:17:34.277223 6390 kubelet.go:1724] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20220602101615-7689" podUID=6d298576-fa23-4014-811b-53705349c461
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.820630 6390 projected.go:199] Error preparing data for projected volume kube-api-access-44t2j for pod kube-system/storage-provisioner: failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:functional-20220602101615-7689" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.821458 6390 projected.go:199] Error preparing data for projected volume kube-api-access-gvmb7 for pod kube-system/kube-proxy-pbh7c: failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:functional-20220602101615-7689" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.821573 6390 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/65bd2b64-e2ad-42e4-ac30-731c3241a5a9-kube-api-access-44t2j podName:65bd2b64-e2ad-42e4-ac30-731c3241a5a9 nodeName:}" failed. No retries permitted until 2022-06-02 17:17:36.821560847 +0000 UTC m=+7.213225748 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-44t2j" (UniqueName: "kubernetes.io/projected/65bd2b64-e2ad-42e4-ac30-731c3241a5a9-kube-api-access-44t2j") pod "storage-provisioner" (UID: "65bd2b64-e2ad-42e4-ac30-731c3241a5a9") : failed to fetch token: serviceaccounts "storage-provisioner" is forbidden: User "system:node:functional-20220602101615-7689" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.821883 6390 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/42a1cf5c-8a5b-4c27-af07-3715bdec9df6-kube-api-access-gvmb7 podName:42a1cf5c-8a5b-4c27-af07-3715bdec9df6 nodeName:}" failed. No retries permitted until 2022-06-02 17:17:36.821874441 +0000 UTC m=+7.213539340 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gvmb7" (UniqueName: "kubernetes.io/projected/42a1cf5c-8a5b-4c27-af07-3715bdec9df6-kube-api-access-gvmb7") pod "kube-proxy-pbh7c" (UID: "42a1cf5c-8a5b-4c27-af07-3715bdec9df6") : failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:functional-20220602101615-7689" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: W0602 17:17:35.821646 6390 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-20220602101615-7689" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.821974 6390 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-20220602101615-7689" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: W0602 17:17:35.821718 6390 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-20220602101615-7689" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.822062 6390 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-20220602101615-7689" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: W0602 17:17:35.821741 6390 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-20220602101615-7689" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.822150 6390 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-20220602101615-7689" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.821782 6390 projected.go:199] Error preparing data for projected volume kube-api-access-64bgg for pod kube-system/coredns-64897985d-4lv74: failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:functional-20220602101615-7689" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.822247 6390 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/66c4d521-7d3f-4f49-b9b4-8310a79d576f-kube-api-access-64bgg podName:66c4d521-7d3f-4f49-b9b4-8310a79d576f nodeName:}" failed. No retries permitted until 2022-06-02 17:17:36.822240582 +0000 UTC m=+7.213905484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-64bgg" (UniqueName: "kubernetes.io/projected/66c4d521-7d3f-4f49-b9b4-8310a79d576f-kube-api-access-64bgg") pod "coredns-64897985d-4lv74" (UID: "66c4d521-7d3f-4f49-b9b4-8310a79d576f") : failed to fetch token: serviceaccounts "coredns" is forbidden: User "system:node:functional-20220602101615-7689" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20220602101615-7689' and this object
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.899312 6390 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:35.899374 6390 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/42a1cf5c-8a5b-4c27-af07-3715bdec9df6-kube-proxy podName:42a1cf5c-8a5b-4c27-af07-3715bdec9df6 nodeName:}" failed. No retries permitted until 2022-06-02 17:17:37.899362104 +0000 UTC m=+8.291027006 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/42a1cf5c-8a5b-4c27-af07-3715bdec9df6-kube-proxy") pod "kube-proxy-pbh7c" (UID: "42a1cf5c-8a5b-4c27-af07-3715bdec9df6") : failed to sync configmap cache: timed out waiting for the condition
Jun 02 17:17:35 functional-20220602101615-7689 kubelet[6390]: I0602 17:17:35.954808 6390 kubelet.go:1729] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-20220602101615-7689"
Jun 02 17:17:36 functional-20220602101615-7689 kubelet[6390]: I0602 17:17:36.284773 6390 kubelet.go:1724] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20220602101615-7689" podUID=6d298576-fa23-4014-811b-53705349c461
Jun 02 17:17:37 functional-20220602101615-7689 kubelet[6390]: I0602 17:17:37.293106 6390 kubelet.go:1724] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20220602101615-7689" podUID=6d298576-fa23-4014-811b-53705349c461
Jun 02 17:17:38 functional-20220602101615-7689 kubelet[6390]: I0602 17:17:38.244390 6390 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-4lv74 through plugin: invalid network status for"
Jun 02 17:17:38 functional-20220602101615-7689 kubelet[6390]: I0602 17:17:38.307252 6390 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-4lv74 through plugin: invalid network status for"
Jun 02 17:17:38 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:38.922205 6390 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jun 02 17:17:38 functional-20220602101615-7689 kubelet[6390]: E0602 17:17:38.922535 6390 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/42a1cf5c-8a5b-4c27-af07-3715bdec9df6-kube-proxy podName:42a1cf5c-8a5b-4c27-af07-3715bdec9df6 nodeName:}" failed. No retries permitted until 2022-06-02 17:17:42.922508947 +0000 UTC m=+13.314173867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/42a1cf5c-8a5b-4c27-af07-3715bdec9df6-kube-proxy") pod "kube-proxy-pbh7c" (UID: "42a1cf5c-8a5b-4c27-af07-3715bdec9df6") : failed to sync configmap cache: timed out waiting for the condition
Jun 02 17:17:39 functional-20220602101615-7689 kubelet[6390]: I0602 17:17:39.375997 6390 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-4lv74 through plugin: invalid network status for"
*
* ==> storage-provisioner [3c3223c77dbe] <==
* I0602 17:17:26.365723 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0602 17:17:28.504578 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0602 17:17:28.504628 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
E0602 17:17:31.961369 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
I0602 17:17:45.912295 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0602 17:17:45.912579 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220602101615-7689_54fb4407-1186-49b7-82f0-486b5013cf9a!
I0602 17:17:45.913290 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a48ec2d-fcfb-4dbe-a574-3389dc69f125", APIVersion:"v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220602101615-7689_54fb4407-1186-49b7-82f0-486b5013cf9a became leader
I0602 17:17:46.013418 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220602101615-7689_54fb4407-1186-49b7-82f0-486b5013cf9a!
*
* ==> storage-provisioner [d6dc4f8ab1fb] <==
* I0602 17:17:03.347443 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0602 17:17:03.357336 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0602 17:17:03.357383 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0602 17:17:03.365699 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0602 17:17:03.367053 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220602101615-7689_b43712e1-672d-4fdb-8104-c6fc10d6c7be!
I0602 17:17:03.370689 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a48ec2d-fcfb-4dbe-a574-3389dc69f125", APIVersion:"v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220602101615-7689_b43712e1-672d-4fdb-8104-c6fc10d6c7be became leader
I0602 17:17:03.468115 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220602101615-7689_b43712e1-672d-4fdb-8104-c6fc10d6c7be!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220602101615-7689 -n functional-20220602101615-7689
helpers_test.go:261: (dbg) Run: kubectl --context functional-20220602101615-7689 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestFunctional/serial/ComponentHealth]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-20220602101615-7689 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220602101615-7689 describe pod : exit status 1 (32.629445ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context functional-20220602101615-7689 describe pod : exit status 1
--- FAIL: TestFunctional/serial/ComponentHealth (3.59s)