=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-101309 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E1109 10:13:40.386478 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:15:56.528900 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:16:24.226889 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:16:45.274998 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.280497 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.290758 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.311473 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.351928 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.434212 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.596415 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.918631 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:46.560938 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:47.843248 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:50.405524 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:55.526160 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:17:05.768455 22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-101309 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.24789552s)
-- stdout --
* [ingress-addon-legacy-101309] minikube v1.28.0 on Darwin 13.0
- MINIKUBE_LOCATION=15331
- KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-101309 in cluster ingress-addon-legacy-101309
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I1109 10:13:09.101222 25528 out.go:296] Setting OutFile to fd 1 ...
I1109 10:13:09.101414 25528 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 10:13:09.101419 25528 out.go:309] Setting ErrFile to fd 2...
I1109 10:13:09.101428 25528 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 10:13:09.101544 25528 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
I1109 10:13:09.102094 25528 out.go:303] Setting JSON to false
I1109 10:13:09.120887 25528 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11564,"bootTime":1668006025,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W1109 10:13:09.120980 25528 start.go:124] gopshost.Virtualization returned error: not implemented yet
I1109 10:13:09.142512 25528 out.go:177] * [ingress-addon-legacy-101309] minikube v1.28.0 on Darwin 13.0
I1109 10:13:09.163984 25528 notify.go:220] Checking for updates...
I1109 10:13:09.185313 25528 out.go:177] - MINIKUBE_LOCATION=15331
I1109 10:13:09.207023 25528 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
I1109 10:13:09.228454 25528 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I1109 10:13:09.250406 25528 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1109 10:13:09.272338 25528 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
I1109 10:13:09.293622 25528 driver.go:365] Setting default libvirt URI to qemu:///system
I1109 10:13:09.353877 25528 docker.go:137] docker version: linux-20.10.20
I1109 10:13:09.354025 25528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1109 10:13:09.494583 25528 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-09 18:13:09.418916855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1109 10:13:09.537993 25528 out.go:177] * Using the docker driver based on user configuration
I1109 10:13:09.558792 25528 start.go:282] selected driver: docker
I1109 10:13:09.558810 25528 start.go:808] validating driver "docker" against <nil>
I1109 10:13:09.558829 25528 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1109 10:13:09.561393 25528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1109 10:13:09.701153 25528 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-09 18:13:09.627163036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1109 10:13:09.701269 25528 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1109 10:13:09.701414 25528 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1109 10:13:09.722954 25528 out.go:177] * Using Docker Desktop driver with root privileges
I1109 10:13:09.743622 25528 cni.go:95] Creating CNI manager for ""
I1109 10:13:09.743642 25528 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1109 10:13:09.743659 25528 start_flags.go:317] config:
{Name:ingress-addon-legacy-101309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-101309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1109 10:13:09.764954 25528 out.go:177] * Starting control plane node ingress-addon-legacy-101309 in cluster ingress-addon-legacy-101309
I1109 10:13:09.806917 25528 cache.go:120] Beginning downloading kic base image for docker with docker
I1109 10:13:09.828664 25528 out.go:177] * Pulling base image ...
I1109 10:13:09.870782 25528 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1109 10:13:09.870862 25528 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1109 10:13:09.925665 25528 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I1109 10:13:09.925688 25528 cache.go:57] Caching tarball of preloaded images
I1109 10:13:09.925910 25528 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1109 10:13:09.968628 25528 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I1109 10:13:09.979461 25528 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1109 10:13:09.989858 25528 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1109 10:13:09.989876 25528 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1109 10:13:10.072888 25528 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I1109 10:13:14.710804 25528 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1109 10:13:14.711005 25528 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1109 10:13:15.319140 25528 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I1109 10:13:15.319427 25528 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/config.json ...
I1109 10:13:15.319456 25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/config.json: {Name:mkc6c9654378d90b31df64c0b57677f0797202a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1109 10:13:15.319774 25528 cache.go:208] Successfully downloaded all kic artifacts
I1109 10:13:15.319800 25528 start.go:364] acquiring machines lock for ingress-addon-legacy-101309: {Name:mk793ac2e4d48107a3d3957703e95cafe0d3757c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1109 10:13:15.319955 25528 start.go:368] acquired machines lock for "ingress-addon-legacy-101309" in 148.788µs
I1109 10:13:15.320010 25528 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-101309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-101309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I1109 10:13:15.320136 25528 start.go:125] createHost starting for "" (driver="docker")
I1109 10:13:15.363919 25528 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1109 10:13:15.364241 25528 start.go:159] libmachine.API.Create for "ingress-addon-legacy-101309" (driver="docker")
I1109 10:13:15.364284 25528 client.go:168] LocalClient.Create starting
I1109 10:13:15.364491 25528 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem
I1109 10:13:15.364575 25528 main.go:134] libmachine: Decoding PEM data...
I1109 10:13:15.364606 25528 main.go:134] libmachine: Parsing certificate...
I1109 10:13:15.364715 25528 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem
I1109 10:13:15.364782 25528 main.go:134] libmachine: Decoding PEM data...
I1109 10:13:15.364805 25528 main.go:134] libmachine: Parsing certificate...
I1109 10:13:15.365771 25528 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-101309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1109 10:13:15.422737 25528 cli_runner.go:211] docker network inspect ingress-addon-legacy-101309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1109 10:13:15.422869 25528 network_create.go:272] running [docker network inspect ingress-addon-legacy-101309] to gather additional debugging logs...
I1109 10:13:15.422894 25528 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-101309
W1109 10:13:15.477007 25528 cli_runner.go:211] docker network inspect ingress-addon-legacy-101309 returned with exit code 1
I1109 10:13:15.477034 25528 network_create.go:275] error running [docker network inspect ingress-addon-legacy-101309]: docker network inspect ingress-addon-legacy-101309: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-101309
I1109 10:13:15.477058 25528 network_create.go:277] output of [docker network inspect ingress-addon-legacy-101309]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-101309
** /stderr **
I1109 10:13:15.477188 25528 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1109 10:13:15.531631 25528 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000490118] misses:0}
I1109 10:13:15.531675 25528 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1109 10:13:15.531692 25528 network_create.go:115] attempt to create docker network ingress-addon-legacy-101309 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1109 10:13:15.531806 25528 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-101309 ingress-addon-legacy-101309
I1109 10:13:15.664612 25528 network_create.go:99] docker network ingress-addon-legacy-101309 192.168.49.0/24 created
I1109 10:13:15.664649 25528 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-101309" container
I1109 10:13:15.664784 25528 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1109 10:13:15.719673 25528 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-101309 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-101309 --label created_by.minikube.sigs.k8s.io=true
I1109 10:13:15.775635 25528 oci.go:103] Successfully created a docker volume ingress-addon-legacy-101309
I1109 10:13:15.775775 25528 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-101309-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-101309 --entrypoint /usr/bin/test -v ingress-addon-legacy-101309:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I1109 10:13:16.226904 25528 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-101309
I1109 10:13:16.226962 25528 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1109 10:13:16.226977 25528 kic.go:179] Starting extracting preloaded images to volume ...
I1109 10:13:16.227099 25528 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-101309:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
I1109 10:13:20.696590 25528 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-101309:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.469404475s)
I1109 10:13:20.696615 25528 kic.go:188] duration metric: took 4.469636 seconds to extract preloaded images to volume
I1109 10:13:20.696761 25528 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1109 10:13:20.838465 25528 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-101309 --name ingress-addon-legacy-101309 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-101309 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-101309 --network ingress-addon-legacy-101309 --ip 192.168.49.2 --volume ingress-addon-legacy-101309:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I1109 10:13:21.184522 25528 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Running}}
I1109 10:13:21.242235 25528 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Status}}
I1109 10:13:21.302750 25528 cli_runner.go:164] Run: docker exec ingress-addon-legacy-101309 stat /var/lib/dpkg/alternatives/iptables
I1109 10:13:21.407466 25528 oci.go:144] the created container "ingress-addon-legacy-101309" has a running status.
I1109 10:13:21.407503 25528 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa...
I1109 10:13:21.461181 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1109 10:13:21.461262 25528 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1109 10:13:21.564180 25528 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Status}}
I1109 10:13:21.620204 25528 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1109 10:13:21.620223 25528 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-101309 chown docker:docker /home/docker/.ssh/authorized_keys]
I1109 10:13:21.723780 25528 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Status}}
I1109 10:13:21.779486 25528 machine.go:88] provisioning docker machine ...
I1109 10:13:21.779527 25528 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-101309"
I1109 10:13:21.779640 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:21.835979 25528 main.go:134] libmachine: Using SSH client type: native
I1109 10:13:21.836179 25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil> [] 0s} 127.0.0.1 61702 <nil> <nil>}
I1109 10:13:21.836196 25528 main.go:134] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-101309 && echo "ingress-addon-legacy-101309" | sudo tee /etc/hostname
I1109 10:13:21.961851 25528 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-101309
I1109 10:13:21.961962 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:22.019385 25528 main.go:134] libmachine: Using SSH client type: native
I1109 10:13:22.019543 25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil> [] 0s} 127.0.0.1 61702 <nil> <nil>}
I1109 10:13:22.019561 25528 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-101309' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-101309/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-101309' | sudo tee -a /etc/hosts;
fi
fi
I1109 10:13:22.137077 25528 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1109 10:13:22.137103 25528 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
I1109 10:13:22.137125 25528 ubuntu.go:177] setting up certificates
I1109 10:13:22.137133 25528 provision.go:83] configureAuth start
I1109 10:13:22.137225 25528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-101309
I1109 10:13:22.192674 25528 provision.go:138] copyHostCerts
I1109 10:13:22.192720 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
I1109 10:13:22.192779 25528 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
I1109 10:13:22.192787 25528 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
I1109 10:13:22.192895 25528 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
I1109 10:13:22.193073 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
I1109 10:13:22.193110 25528 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
I1109 10:13:22.193115 25528 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
I1109 10:13:22.193182 25528 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
I1109 10:13:22.193327 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
I1109 10:13:22.193374 25528 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
I1109 10:13:22.193379 25528 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
I1109 10:13:22.193443 25528 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
I1109 10:13:22.193568 25528 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-101309 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-101309]
I1109 10:13:22.286686 25528 provision.go:172] copyRemoteCerts
I1109 10:13:22.286750 25528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1109 10:13:22.286825 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:22.341999 25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
I1109 10:13:22.427060 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1109 10:13:22.427142 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1109 10:13:22.443455 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem -> /etc/docker/server.pem
I1109 10:13:22.443551 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I1109 10:13:22.459986 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1109 10:13:22.460077 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1109 10:13:22.476982 25528 provision.go:86] duration metric: configureAuth took 339.837004ms
I1109 10:13:22.476995 25528 ubuntu.go:193] setting minikube options for container-runtime
I1109 10:13:22.477154 25528 config.go:180] Loaded profile config "ingress-addon-legacy-101309": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I1109 10:13:22.477232 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:22.532860 25528 main.go:134] libmachine: Using SSH client type: native
I1109 10:13:22.533019 25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil> [] 0s} 127.0.0.1 61702 <nil> <nil>}
I1109 10:13:22.533031 25528 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1109 10:13:22.651686 25528 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I1109 10:13:22.651707 25528 ubuntu.go:71] root file system type: overlay
I1109 10:13:22.651863 25528 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1109 10:13:22.651976 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:22.707804 25528 main.go:134] libmachine: Using SSH client type: native
I1109 10:13:22.707965 25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil> [] 0s} 127.0.0.1 61702 <nil> <nil>}
I1109 10:13:22.708018 25528 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1109 10:13:22.834098 25528 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1109 10:13:22.834205 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:22.889472 25528 main.go:134] libmachine: Using SSH client type: native
I1109 10:13:22.889629 25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil> [] 0s} 127.0.0.1 61702 <nil> <nil>}
I1109 10:13:22.889644 25528 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1109 10:13:23.481016 25528 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-11-09 18:13:22.836077322 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1109 10:13:23.481036 25528 machine.go:91] provisioned docker machine in 1.701530304s
I1109 10:13:23.481044 25528 client.go:171] LocalClient.Create took 8.116748348s
I1109 10:13:23.481062 25528 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-101309" took 8.116822621s
I1109 10:13:23.481075 25528 start.go:300] post-start starting for "ingress-addon-legacy-101309" (driver="docker")
I1109 10:13:23.481081 25528 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1109 10:13:23.481164 25528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1109 10:13:23.481226 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:23.537768 25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
I1109 10:13:23.624306 25528 ssh_runner.go:195] Run: cat /etc/os-release
I1109 10:13:23.628078 25528 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1109 10:13:23.628094 25528 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1109 10:13:23.628101 25528 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1109 10:13:23.628111 25528 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1109 10:13:23.628122 25528 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
I1109 10:13:23.628224 25528 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
I1109 10:13:23.628407 25528 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
I1109 10:13:23.628413 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /etc/ssl/certs/228682.pem
I1109 10:13:23.628627 25528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1109 10:13:23.635481 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
I1109 10:13:23.651662 25528 start.go:303] post-start completed in 170.577695ms
I1109 10:13:23.652241 25528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-101309
I1109 10:13:23.710251 25528 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/config.json ...
I1109 10:13:23.710687 25528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1109 10:13:23.710751 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:23.767412 25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
I1109 10:13:23.857158 25528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1109 10:13:23.861566 25528 start.go:128] duration metric: createHost completed in 8.541420163s
I1109 10:13:23.861583 25528 start.go:83] releasing machines lock for "ingress-addon-legacy-101309", held for 8.541615666s
I1109 10:13:23.861691 25528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-101309
I1109 10:13:23.918453 25528 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1109 10:13:23.918456 25528 ssh_runner.go:195] Run: systemctl --version
I1109 10:13:23.918544 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:23.918551 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:23.977493 25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
I1109 10:13:23.979074 25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
I1109 10:13:24.316598 25528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1109 10:13:24.326851 25528 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1109 10:13:24.326917 25528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1109 10:13:24.335827 25528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1109 10:13:24.348484 25528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1109 10:13:24.414340 25528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1109 10:13:24.480415 25528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1109 10:13:24.544139 25528 ssh_runner.go:195] Run: sudo systemctl restart docker
I1109 10:13:24.743380 25528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1109 10:13:24.772198 25528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1109 10:13:24.821765 25528 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
I1109 10:13:24.821974 25528 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-101309 dig +short host.docker.internal
I1109 10:13:24.932535 25528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I1109 10:13:24.932639 25528 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I1109 10:13:24.937109 25528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1109 10:13:24.947231 25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
I1109 10:13:25.005746 25528 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1109 10:13:25.005843 25528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1109 10:13:25.029107 25528 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I1109 10:13:25.029124 25528 docker.go:543] Images already preloaded, skipping extraction
I1109 10:13:25.029230 25528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1109 10:13:25.051926 25528 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I1109 10:13:25.051949 25528 cache_images.go:84] Images are preloaded, skipping loading
I1109 10:13:25.052038 25528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1109 10:13:25.117557 25528 cni.go:95] Creating CNI manager for ""
I1109 10:13:25.117571 25528 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1109 10:13:25.117591 25528 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1109 10:13:25.117611 25528 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-101309 NodeName:ingress-addon-legacy-101309 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1109 10:13:25.117745 25528 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-101309"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
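Aside, not part of the captured output: a minimal sketch for reading back the config kubeadm actually consumed on this node. The file path comes from the scp/cp lines later in this log, and the container name is this run's profile; it assumes the kic node container is still running.
# Dump the rendered kubeadm config from inside the node container.
docker exec ingress-addon-legacy-101309 cat /var/tmp/minikube/kubeadm.yaml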
I1109 10:13:25.117831 25528 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-101309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-101309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
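Aside, not part of the captured output: the kubelet flags above land in the 10-kubeadm.conf systemd drop-in copied a few lines below; a minimal sketch for checking what systemd actually loaded, assuming the ingress-addon-legacy-101309 node container is still running.
# Print the kubelet unit plus every drop-in (with their paths) as systemd sees them.
docker exec ingress-addon-legacy-101309 systemctl cat kubelet
# Check whether the unit is enabled; a later kubeadm preflight warning flags that it is not.
docker exec ingress-addon-legacy-101309 systemctl is-enabled kubelet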
I1109 10:13:25.117903 25528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I1109 10:13:25.125265 25528 binaries.go:44] Found k8s binaries, skipping transfer
I1109 10:13:25.125331 25528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1109 10:13:25.132267 25528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I1109 10:13:25.144838 25528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I1109 10:13:25.157428 25528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
I1109 10:13:25.170204 25528 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1109 10:13:25.173777 25528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1109 10:13:25.183744 25528 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309 for IP: 192.168.49.2
I1109 10:13:25.183887 25528 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
I1109 10:13:25.183958 25528 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
I1109 10:13:25.184012 25528 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.key
I1109 10:13:25.184029 25528 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.crt with IP's: []
I1109 10:13:25.422707 25528 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.crt ...
I1109 10:13:25.422718 25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.crt: {Name:mkdef0d2eb2470e653103bc9d5f11ae902530f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1109 10:13:25.423085 25528 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.key ...
I1109 10:13:25.423093 25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.key: {Name:mka010c6bec794b172cc3a5cd8ba54b4a128659e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1109 10:13:25.423354 25528 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key.dd3b5fb2
I1109 10:13:25.423392 25528 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1109 10:13:25.744891 25528 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt.dd3b5fb2 ...
I1109 10:13:25.744905 25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt.dd3b5fb2: {Name:mk2ba35356c78eeeb18d6c2a372b94de0951c370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1109 10:13:25.745263 25528 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key.dd3b5fb2 ...
I1109 10:13:25.745274 25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key.dd3b5fb2: {Name:mk7c4631a4ddf056c25e2d12b257eca71e02df48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1109 10:13:25.745509 25528 certs.go:320] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt
I1109 10:13:25.745674 25528 certs.go:324] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key
I1109 10:13:25.745913 25528 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key
I1109 10:13:25.745932 25528 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt with IP's: []
I1109 10:13:25.789785 25528 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt ...
I1109 10:13:25.789793 25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt: {Name:mke30734f39a6e47d99edf1510345a8bcda9e417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1109 10:13:25.790068 25528 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key ...
I1109 10:13:25.790075 25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key: {Name:mkca6ffd881ccb7fa57831a0459aa74b09f8932f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1109 10:13:25.790400 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1109 10:13:25.790434 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1109 10:13:25.790458 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1109 10:13:25.790481 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1109 10:13:25.790544 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1109 10:13:25.790583 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1109 10:13:25.790623 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1109 10:13:25.790646 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1109 10:13:25.790768 25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
W1109 10:13:25.790817 25528 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
I1109 10:13:25.790829 25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
I1109 10:13:25.790909 25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
I1109 10:13:25.790941 25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
I1109 10:13:25.790974 25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
I1109 10:13:25.791086 25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
I1109 10:13:25.791125 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /usr/share/ca-certificates/228682.pem
I1109 10:13:25.791149 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1109 10:13:25.791168 25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem -> /usr/share/ca-certificates/22868.pem
I1109 10:13:25.791690 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1109 10:13:25.809822 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1109 10:13:25.826659 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1109 10:13:25.843445 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1109 10:13:25.860054 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1109 10:13:25.876628 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1109 10:13:25.893647 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1109 10:13:25.910194 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1109 10:13:25.926942 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
I1109 10:13:25.943733 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1109 10:13:25.960313 25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
I1109 10:13:25.977347 25528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1109 10:13:25.989812 25528 ssh_runner.go:195] Run: openssl version
I1109 10:13:25.994949 25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1109 10:13:26.002671 25528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1109 10:13:26.006604 25528 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 9 18:04 /usr/share/ca-certificates/minikubeCA.pem
I1109 10:13:26.006653 25528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1109 10:13:26.011610 25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1109 10:13:26.019250 25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
I1109 10:13:26.026857 25528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
I1109 10:13:26.030625 25528 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 9 18:08 /usr/share/ca-certificates/22868.pem
I1109 10:13:26.030676 25528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
I1109 10:13:26.035875 25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
I1109 10:13:26.043410 25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
I1109 10:13:26.051480 25528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
I1109 10:13:26.055141 25528 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 9 18:08 /usr/share/ca-certificates/228682.pem
I1109 10:13:26.055198 25528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
I1109 10:13:26.060002 25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
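Aside, not part of the captured output: the b5213941.0, 51391683.0, and 3ec20f2e.0 names above are OpenSSL subject-hash links; a minimal sketch reproducing the pattern by hand for one CA file, assuming the node container is still up.
# Compute the subject hash, recreate the hash-named symlink, then show the result.
docker exec ingress-addon-legacy-101309 sh -c 'h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem); ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"; ls -l "/etc/ssl/certs/${h}.0"'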
I1109 10:13:26.067450 25528 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-101309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-101309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1109 10:13:26.067556 25528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1109 10:13:26.089010 25528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1109 10:13:26.096445 25528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1109 10:13:26.103223 25528 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1109 10:13:26.103296 25528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1109 10:13:26.110629 25528 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1109 10:13:26.110657 25528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1109 10:13:26.156695 25528 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I1109 10:13:26.156828 25528 kubeadm.go:317] [preflight] Running pre-flight checks
I1109 10:13:26.438955 25528 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1109 10:13:26.439043 25528 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1109 10:13:26.439126 25528 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1109 10:13:26.649727 25528 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1109 10:13:26.650564 25528 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1109 10:13:26.650647 25528 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1109 10:13:26.720455 25528 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1109 10:13:26.763721 25528 out.go:204] - Generating certificates and keys ...
I1109 10:13:26.763823 25528 kubeadm.go:317] [certs] Using existing ca certificate authority
I1109 10:13:26.763888 25528 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1109 10:13:26.843268 25528 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I1109 10:13:26.957333 25528 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I1109 10:13:27.041389 25528 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I1109 10:13:27.214891 25528 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I1109 10:13:27.335405 25528 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I1109 10:13:27.335520 25528 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1109 10:13:27.434594 25528 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I1109 10:13:27.434698 25528 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1109 10:13:27.605531 25528 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I1109 10:13:27.831674 25528 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I1109 10:13:28.114407 25528 kubeadm.go:317] [certs] Generating "sa" key and public key
I1109 10:13:28.114686 25528 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1109 10:13:28.375328 25528 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1109 10:13:28.881927 25528 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1109 10:13:29.051996 25528 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1109 10:13:29.170501 25528 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1109 10:13:29.171502 25528 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1109 10:13:29.192631 25528 out.go:204] - Booting up control plane ...
I1109 10:13:29.192839 25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1109 10:13:29.193025 25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1109 10:13:29.193159 25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1109 10:13:29.193348 25528 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1109 10:13:29.193649 25528 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1109 10:14:09.154278 25528 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I1109 10:14:09.155436 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:14:09.155661 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:14:14.153788 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:14:14.153990 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:14:24.148230 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:14:24.148486 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:14:44.135340 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:14:44.135564 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:15:24.108423 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:15:24.108645 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:15:24.108661 25528 kubeadm.go:317]
I1109 10:15:24.108699 25528 kubeadm.go:317] Unfortunately, an error has occurred:
I1109 10:15:24.108753 25528 kubeadm.go:317] timed out waiting for the condition
I1109 10:15:24.108769 25528 kubeadm.go:317]
I1109 10:15:24.108814 25528 kubeadm.go:317] This error is likely caused by:
I1109 10:15:24.108849 25528 kubeadm.go:317] - The kubelet is not running
I1109 10:15:24.108968 25528 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1109 10:15:24.108974 25528 kubeadm.go:317]
I1109 10:15:24.109082 25528 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1109 10:15:24.109117 25528 kubeadm.go:317] - 'systemctl status kubelet'
I1109 10:15:24.109147 25528 kubeadm.go:317] - 'journalctl -xeu kubelet'
I1109 10:15:24.109152 25528 kubeadm.go:317]
I1109 10:15:24.109348 25528 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1109 10:15:24.109533 25528 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1109 10:15:24.109573 25528 kubeadm.go:317]
I1109 10:15:24.109709 25528 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I1109 10:15:24.109785 25528 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I1109 10:15:24.109895 25528 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I1109 10:15:24.109934 25528 kubeadm.go:317] - 'docker logs CONTAINERID'
I1109 10:15:24.109941 25528 kubeadm.go:317]
I1109 10:15:24.113197 25528 kubeadm.go:317] W1109 18:13:26.162147 958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1109 10:15:24.113268 25528 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I1109 10:15:24.113371 25528 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
I1109 10:15:24.113476 25528 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1109 10:15:24.113595 25528 kubeadm.go:317] W1109 18:13:29.185196 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1109 10:15:24.113762 25528 kubeadm.go:317] W1109 18:13:29.186047 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1109 10:15:24.113819 25528 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1109 10:15:24.113880 25528 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
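Aside, not part of the captured output: the checks kubeadm suggests above, adapted to run from the host against this run's node container (the container name is the profile name and assumes the container is still up).
# Kubelet service state and recent kubelet logs inside the node container.
docker exec ingress-addon-legacy-101309 systemctl status kubelet --no-pager
docker exec ingress-addon-legacy-101309 journalctl -u kubelet --no-pager -n 100
# List any kube-* containers the inner Docker daemon started (the log-gathering below found none).
docker exec ingress-addon-legacy-101309 sh -c 'docker ps -a | grep kube | grep -v pause'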
W1109 10:15:24.114116 25528 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1109 18:13:26.162147 958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1109 18:13:29.185196 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1109 18:13:29.186047 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1109 18:13:26.162147 958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1109 18:13:29.185196 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1109 18:13:29.186047 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I1109 10:15:24.114147 25528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I1109 10:15:24.529235 25528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1109 10:15:24.538622 25528 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1109 10:15:24.538683 25528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1109 10:15:24.545953 25528 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1109 10:15:24.545973 25528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1109 10:15:24.592545 25528 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I1109 10:15:24.592605 25528 kubeadm.go:317] [preflight] Running pre-flight checks
I1109 10:15:24.871613 25528 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1109 10:15:24.871710 25528 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1109 10:15:24.871789 25528 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1109 10:15:25.086220 25528 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1109 10:15:25.087441 25528 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1109 10:15:25.087506 25528 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1109 10:15:25.153322 25528 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1109 10:15:25.174791 25528 out.go:204] - Generating certificates and keys ...
I1109 10:15:25.174858 25528 kubeadm.go:317] [certs] Using existing ca certificate authority
I1109 10:15:25.174940 25528 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1109 10:15:25.175018 25528 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1109 10:15:25.175071 25528 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
I1109 10:15:25.175145 25528 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
I1109 10:15:25.175205 25528 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
I1109 10:15:25.175265 25528 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
I1109 10:15:25.175319 25528 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
I1109 10:15:25.175382 25528 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1109 10:15:25.175432 25528 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1109 10:15:25.175472 25528 kubeadm.go:317] [certs] Using the existing "sa" key
I1109 10:15:25.175535 25528 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1109 10:15:25.261196 25528 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1109 10:15:25.429331 25528 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1109 10:15:25.695228 25528 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1109 10:15:25.807505 25528 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1109 10:15:25.807998 25528 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1109 10:15:25.829717 25528 out.go:204] - Booting up control plane ...
I1109 10:15:25.829984 25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1109 10:15:25.830154 25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1109 10:15:25.830274 25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1109 10:15:25.830392 25528 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1109 10:15:25.830677 25528 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1109 10:16:05.790596 25528 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I1109 10:16:05.791561 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:16:05.791774 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:16:10.790081 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:16:10.790339 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:16:20.784934 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:16:20.785154 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:16:40.772411 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:16:40.772638 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:17:20.745476 25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1109 10:17:20.745686 25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1109 10:17:20.745701 25528 kubeadm.go:317]
I1109 10:17:20.745751 25528 kubeadm.go:317] Unfortunately, an error has occurred:
I1109 10:17:20.745798 25528 kubeadm.go:317] timed out waiting for the condition
I1109 10:17:20.745804 25528 kubeadm.go:317]
I1109 10:17:20.745841 25528 kubeadm.go:317] This error is likely caused by:
I1109 10:17:20.745890 25528 kubeadm.go:317] - The kubelet is not running
I1109 10:17:20.745998 25528 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1109 10:17:20.746006 25528 kubeadm.go:317]
I1109 10:17:20.746116 25528 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1109 10:17:20.746159 25528 kubeadm.go:317] - 'systemctl status kubelet'
I1109 10:17:20.746191 25528 kubeadm.go:317] - 'journalctl -xeu kubelet'
I1109 10:17:20.746198 25528 kubeadm.go:317]
I1109 10:17:20.746300 25528 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1109 10:17:20.746379 25528 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1109 10:17:20.746387 25528 kubeadm.go:317]
I1109 10:17:20.746498 25528 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I1109 10:17:20.746551 25528 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I1109 10:17:20.746618 25528 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I1109 10:17:20.746651 25528 kubeadm.go:317] - 'docker logs CONTAINERID'
I1109 10:17:20.746657 25528 kubeadm.go:317]
I1109 10:17:20.748946 25528 kubeadm.go:317] W1109 18:15:24.596951 3444 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1109 10:17:20.749028 25528 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I1109 10:17:20.749149 25528 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
I1109 10:17:20.749225 25528 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1109 10:17:20.749345 25528 kubeadm.go:317] W1109 18:15:25.817773 3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1109 10:17:20.749432 25528 kubeadm.go:317] W1109 18:15:25.818787 3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1109 10:17:20.749500 25528 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1109 10:17:20.749557 25528 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1109 10:17:20.749590 25528 kubeadm.go:398] StartCluster complete in 3m54.682054126s
I1109 10:17:20.749689 25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1109 10:17:20.772328 25528 logs.go:274] 0 containers: []
W1109 10:17:20.772341 25528 logs.go:276] No container was found matching "kube-apiserver"
I1109 10:17:20.772427 25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1109 10:17:20.794407 25528 logs.go:274] 0 containers: []
W1109 10:17:20.794418 25528 logs.go:276] No container was found matching "etcd"
I1109 10:17:20.794502 25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1109 10:17:20.817326 25528 logs.go:274] 0 containers: []
W1109 10:17:20.817337 25528 logs.go:276] No container was found matching "coredns"
I1109 10:17:20.817421 25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1109 10:17:20.844735 25528 logs.go:274] 0 containers: []
W1109 10:17:20.844746 25528 logs.go:276] No container was found matching "kube-scheduler"
I1109 10:17:20.844824 25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1109 10:17:20.866441 25528 logs.go:274] 0 containers: []
W1109 10:17:20.866453 25528 logs.go:276] No container was found matching "kube-proxy"
I1109 10:17:20.866535 25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1109 10:17:20.888238 25528 logs.go:274] 0 containers: []
W1109 10:17:20.888249 25528 logs.go:276] No container was found matching "kubernetes-dashboard"
I1109 10:17:20.888334 25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1109 10:17:20.909202 25528 logs.go:274] 0 containers: []
W1109 10:17:20.909214 25528 logs.go:276] No container was found matching "storage-provisioner"
I1109 10:17:20.909298 25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1109 10:17:20.930795 25528 logs.go:274] 0 containers: []
W1109 10:17:20.930811 25528 logs.go:276] No container was found matching "kube-controller-manager"
I1109 10:17:20.930819 25528 logs.go:123] Gathering logs for dmesg ...
I1109 10:17:20.930826 25528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1109 10:17:20.943516 25528 logs.go:123] Gathering logs for describe nodes ...
I1109 10:17:20.943532 25528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1109 10:17:20.996952 25528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1109 10:17:20.996962 25528 logs.go:123] Gathering logs for Docker ...
I1109 10:17:20.996969 25528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1109 10:17:21.012476 25528 logs.go:123] Gathering logs for container status ...
I1109 10:17:21.012489 25528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1109 10:17:23.062097 25528 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049596062s)
I1109 10:17:23.062271 25528 logs.go:123] Gathering logs for kubelet ...
I1109 10:17:23.062281 25528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
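Aside, not part of the captured output: the dmesg, Docker, and kubelet logs minikube is gathering here can also be pulled into a single file with its log bundler, assuming the profile still exists on the host.
# Collect the same diagnostics for this profile into one file.
out/minikube-darwin-amd64 logs -p ingress-addon-legacy-101309 --file=ingress-addon-legacy-101309-logs.txt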
W1109 10:17:23.100608 25528 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1109 18:15:24.596951 3444 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1109 18:15:25.817773 3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1109 18:15:25.818787 3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1109 10:17:23.100628 25528 out.go:239] *
W1109 10:17:23.100748 25528 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1109 18:15:24.596951 3444 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1109 18:15:25.817773 3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1109 18:15:25.818787 3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1109 10:17:23.100766 25528 out.go:239] *
W1109 10:17:23.101406 25528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1109 10:17:23.166357 25528 out.go:177]
W1109 10:17:23.209319 25528 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1109 18:15:24.596951 3444 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1109 18:15:25.817773 3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1109 18:15:25.818787 3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1109 10:17:23.209492 25528 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1109 10:17:23.209559 25528 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1109 10:17:23.231281 25528 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-101309 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.28s)
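Editor's note (hedged sketch, not part of the captured log): the Suggestion line above points at the kubelet cgroup driver. Assuming the same profile name and flags as the failing invocation recorded in this log, a manual retry might look like the following; only the delete step and the --extra-config flag suggested by minikube are additions.
# Hypothetical retry: profile and flags copied from the failing run above,
# --extra-config=kubelet.cgroup-driver=systemd is the fix minikube suggests.
out/minikube-darwin-amd64 delete -p ingress-addon-legacy-101309
out/minikube-darwin-amd64 start -p ingress-addon-legacy-101309 \
  --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
  --alsologtostderr -v=5 --driver=docker \
  --extra-config=kubelet.cgroup-driver=systemd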