=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E1101 15:55:03.191932 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:57:19.340655 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:57:44.170875 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.176386 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.188605 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.210880 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.251576 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.333777 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.494086 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:44.814170 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:45.455030 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:46.737349 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:47.029713 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/addons-154449/client.crt: no such file or directory
E1101 15:57:49.297483 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:57:54.417837 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
E1101 15:58:04.657867 3413 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/functional-154936/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.310729265s)
-- stdout --
* [ingress-addon-legacy-155406] minikube v1.27.1 on Darwin 13.0
- MINIKUBE_LOCATION=15232
- KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-155406 in cluster ingress-addon-legacy-155406
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I1101 15:54:06.919419 6026 out.go:296] Setting OutFile to fd 1 ...
I1101 15:54:06.919582 6026 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 15:54:06.919587 6026 out.go:309] Setting ErrFile to fd 2...
I1101 15:54:06.919591 6026 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 15:54:06.919698 6026 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15232-2108/.minikube/bin
I1101 15:54:06.920254 6026 out.go:303] Setting JSON to false
I1101 15:54:06.939076 6026 start.go:116] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1421,"bootTime":1667341825,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W1101 15:54:06.939223 6026 start.go:124] gopshost.Virtualization returned error: not implemented yet
I1101 15:54:06.961780 6026 out.go:177] * [ingress-addon-legacy-155406] minikube v1.27.1 on Darwin 13.0
I1101 15:54:07.005783 6026 notify.go:220] Checking for updates...
I1101 15:54:07.027310 6026 out.go:177] - MINIKUBE_LOCATION=15232
I1101 15:54:07.048728 6026 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15232-2108/kubeconfig
I1101 15:54:07.070763 6026 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I1101 15:54:07.092573 6026 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 15:54:07.113710 6026 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15232-2108/.minikube
I1101 15:54:07.135830 6026 driver.go:365] Setting default libvirt URI to qemu:///system
I1101 15:54:07.197160 6026 docker.go:137] docker version: linux-20.10.20
I1101 15:54:07.197319 6026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 15:54:07.338629 6026 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-01 22:54:07.257675909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1101 15:54:07.382313 6026 out.go:177] * Using the docker driver based on user configuration
I1101 15:54:07.403410 6026 start.go:282] selected driver: docker
I1101 15:54:07.403437 6026 start.go:808] validating driver "docker" against <nil>
I1101 15:54:07.403467 6026 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 15:54:07.407285 6026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 15:54:07.549483 6026 info.go:266] docker info: {ID:HPSG:A4AE:7PJH:NBWO:ONHL:GSQ4:6VVP:PETP:L7TN:PZXT:AQQ7:NM5P Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-01 22:54:07.469000737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1101 15:54:07.549594 6026 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1101 15:54:07.549731 6026 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1101 15:54:07.571565 6026 out.go:177] * Using Docker Desktop driver with root privileges
I1101 15:54:07.593464 6026 cni.go:95] Creating CNI manager for ""
I1101 15:54:07.593496 6026 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1101 15:54:07.593512 6026 start_flags.go:317] config:
{Name:ingress-addon-legacy-155406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-155406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 15:54:07.615176 6026 out.go:177] * Starting control plane node ingress-addon-legacy-155406 in cluster ingress-addon-legacy-155406
I1101 15:54:07.657474 6026 cache.go:120] Beginning downloading kic base image for docker with docker
I1101 15:54:07.679465 6026 out.go:177] * Pulling base image ...
I1101 15:54:07.722404 6026 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1101 15:54:07.722464 6026 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1101 15:54:07.777831 6026 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1101 15:54:07.777853 6026 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1101 15:54:07.806734 6026 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I1101 15:54:07.806764 6026 cache.go:57] Caching tarball of preloaded images
I1101 15:54:07.807167 6026 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1101 15:54:07.851201 6026 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I1101 15:54:07.872430 6026 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1101 15:54:07.959227 6026 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I1101 15:54:12.132975 6026 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1101 15:54:12.133241 6026 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1101 15:54:12.750863 6026 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I1101 15:54:12.751136 6026 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/config.json ...
I1101 15:54:12.751166 6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/config.json: {Name:mkc057165dd22cb54ce9b6c28b65dd8e7b7e727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 15:54:12.751472 6026 cache.go:208] Successfully downloaded all kic artifacts
I1101 15:54:12.751497 6026 start.go:364] acquiring machines lock for ingress-addon-legacy-155406: {Name:mk2aef93171a1a7629f910f37708b3772b41b4c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1101 15:54:12.751628 6026 start.go:368] acquired machines lock for "ingress-addon-legacy-155406" in 124.139µs
I1101 15:54:12.751655 6026 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-155406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-155406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I1101 15:54:12.751738 6026 start.go:125] createHost starting for "" (driver="docker")
I1101 15:54:12.796760 6026 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1101 15:54:12.797000 6026 start.go:159] libmachine.API.Create for "ingress-addon-legacy-155406" (driver="docker")
I1101 15:54:12.797046 6026 client.go:168] LocalClient.Create starting
I1101 15:54:12.797163 6026 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem
I1101 15:54:12.797250 6026 main.go:134] libmachine: Decoding PEM data...
I1101 15:54:12.797266 6026 main.go:134] libmachine: Parsing certificate...
I1101 15:54:12.797319 6026 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem
I1101 15:54:12.797386 6026 main.go:134] libmachine: Decoding PEM data...
I1101 15:54:12.797396 6026 main.go:134] libmachine: Parsing certificate...
I1101 15:54:12.797981 6026 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-155406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 15:54:12.857239 6026 cli_runner.go:211] docker network inspect ingress-addon-legacy-155406 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 15:54:12.857364 6026 network_create.go:272] running [docker network inspect ingress-addon-legacy-155406] to gather additional debugging logs...
I1101 15:54:12.857390 6026 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-155406
W1101 15:54:12.914199 6026 cli_runner.go:211] docker network inspect ingress-addon-legacy-155406 returned with exit code 1
I1101 15:54:12.914224 6026 network_create.go:275] error running [docker network inspect ingress-addon-legacy-155406]: docker network inspect ingress-addon-legacy-155406: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-155406
I1101 15:54:12.914248 6026 network_create.go:277] output of [docker network inspect ingress-addon-legacy-155406]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-155406
** /stderr **
I1101 15:54:12.914352 6026 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 15:54:12.969924 6026 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b104c0] misses:0}
I1101 15:54:12.969963 6026 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1101 15:54:12.969975 6026 network_create.go:115] attempt to create docker network ingress-addon-legacy-155406 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1101 15:54:12.970072 6026 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-155406 ingress-addon-legacy-155406
I1101 15:54:13.057836 6026 network_create.go:99] docker network ingress-addon-legacy-155406 192.168.49.0/24 created
I1101 15:54:13.057881 6026 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-155406" container
I1101 15:54:13.058016 6026 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1101 15:54:13.114962 6026 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-155406 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-155406 --label created_by.minikube.sigs.k8s.io=true
I1101 15:54:13.171685 6026 oci.go:103] Successfully created a docker volume ingress-addon-legacy-155406
I1101 15:54:13.171821 6026 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-155406-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-155406 --entrypoint /usr/bin/test -v ingress-addon-legacy-155406:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I1101 15:54:13.771009 6026 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-155406
I1101 15:54:13.771059 6026 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1101 15:54:13.771073 6026 kic.go:179] Starting extracting preloaded images to volume ...
I1101 15:54:13.771201 6026 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-155406:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
I1101 15:54:18.843420 6026 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15232-2108/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-155406:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (5.072248804s)
I1101 15:54:18.843442 6026 kic.go:188] duration metric: took 5.072492 seconds to extract preloaded images to volume
I1101 15:54:18.843564 6026 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1101 15:54:18.986566 6026 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-155406 --name ingress-addon-legacy-155406 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-155406 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-155406 --network ingress-addon-legacy-155406 --ip 192.168.49.2 --volume ingress-addon-legacy-155406:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I1101 15:54:19.340404 6026 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Running}}
I1101 15:54:19.401162 6026 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Status}}
I1101 15:54:19.463125 6026 cli_runner.go:164] Run: docker exec ingress-addon-legacy-155406 stat /var/lib/dpkg/alternatives/iptables
I1101 15:54:19.585807 6026 oci.go:144] the created container "ingress-addon-legacy-155406" has a running status.
I1101 15:54:19.585841 6026 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa...
I1101 15:54:19.787116 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1101 15:54:19.787204 6026 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1101 15:54:19.894883 6026 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Status}}
I1101 15:54:19.953285 6026 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1101 15:54:19.953311 6026 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-155406 chown docker:docker /home/docker/.ssh/authorized_keys]
I1101 15:54:20.060715 6026 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-155406 --format={{.State.Status}}
I1101 15:54:20.117785 6026 machine.go:88] provisioning docker machine ...
I1101 15:54:20.117831 6026 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-155406"
I1101 15:54:20.117944 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:20.177198 6026 main.go:134] libmachine: Using SSH client type: native
I1101 15:54:20.177401 6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50503 <nil> <nil>}
I1101 15:54:20.177417 6026 main.go:134] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-155406 && echo "ingress-addon-legacy-155406" | sudo tee /etc/hostname
I1101 15:54:20.304005 6026 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-155406
I1101 15:54:20.304129 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:20.364870 6026 main.go:134] libmachine: Using SSH client type: native
I1101 15:54:20.365043 6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50503 <nil> <nil>}
I1101 15:54:20.365064 6026 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-155406' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-155406/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-155406' | sudo tee -a /etc/hosts;
fi
fi
I1101 15:54:20.482437 6026 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1101 15:54:20.482455 6026 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15232-2108/.minikube CaCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15232-2108/.minikube}
I1101 15:54:20.482480 6026 ubuntu.go:177] setting up certificates
I1101 15:54:20.482488 6026 provision.go:83] configureAuth start
I1101 15:54:20.482575 6026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-155406
I1101 15:54:20.539273 6026 provision.go:138] copyHostCerts
I1101 15:54:20.539317 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
I1101 15:54:20.539385 6026 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem, removing ...
I1101 15:54:20.539392 6026 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem
I1101 15:54:20.539513 6026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/key.pem (1675 bytes)
I1101 15:54:20.539705 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
I1101 15:54:20.539742 6026 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem, removing ...
I1101 15:54:20.539748 6026 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem
I1101 15:54:20.539815 6026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.pem (1082 bytes)
I1101 15:54:20.539940 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
I1101 15:54:20.539973 6026 exec_runner.go:144] found /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem, removing ...
I1101 15:54:20.539978 6026 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem
I1101 15:54:20.540044 6026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15232-2108/.minikube/cert.pem (1123 bytes)
I1101 15:54:20.540192 6026 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-155406 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-155406]
I1101 15:54:20.824844 6026 provision.go:172] copyRemoteCerts
I1101 15:54:20.824907 6026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1101 15:54:20.824969 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:20.884535 6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
I1101 15:54:20.973548 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1101 15:54:20.973640 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1101 15:54:20.990289 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1101 15:54:20.990365 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1101 15:54:21.008145 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem -> /etc/docker/server.pem
I1101 15:54:21.008222 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I1101 15:54:21.025338 6026 provision.go:86] duration metric: configureAuth took 542.849779ms
I1101 15:54:21.025350 6026 ubuntu.go:193] setting minikube options for container-runtime
I1101 15:54:21.025529 6026 config.go:180] Loaded profile config "ingress-addon-legacy-155406": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I1101 15:54:21.025617 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:21.082892 6026 main.go:134] libmachine: Using SSH client type: native
I1101 15:54:21.083050 6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50503 <nil> <nil>}
I1101 15:54:21.083068 6026 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1101 15:54:21.202666 6026 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I1101 15:54:21.202684 6026 ubuntu.go:71] root file system type: overlay
I1101 15:54:21.202847 6026 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1101 15:54:21.202950 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:21.262257 6026 main.go:134] libmachine: Using SSH client type: native
I1101 15:54:21.262421 6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50503 <nil> <nil>}
I1101 15:54:21.262475 6026 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1101 15:54:21.388424 6026 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1101 15:54:21.388552 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:21.445284 6026 main.go:134] libmachine: Using SSH client type: native
I1101 15:54:21.445457 6026 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50503 <nil> <nil>}
I1101 15:54:21.445470 6026 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1101 15:54:22.031631 6026 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-11-01 22:54:21.406669928 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1101 15:54:22.031651 6026 machine.go:91] provisioned docker machine in 1.913893379s
I1101 15:54:22.031707 6026 client.go:171] LocalClient.Create took 9.234851349s
I1101 15:54:22.031730 6026 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-155406" took 9.234953934s
I1101 15:54:22.031758 6026 start.go:300] post-start starting for "ingress-addon-legacy-155406" (driver="docker")
I1101 15:54:22.031788 6026 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1101 15:54:22.031907 6026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1101 15:54:22.032020 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:22.091848 6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
I1101 15:54:22.183883 6026 ssh_runner.go:195] Run: cat /etc/os-release
I1101 15:54:22.187639 6026 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1101 15:54:22.187656 6026 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1101 15:54:22.187663 6026 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1101 15:54:22.187669 6026 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1101 15:54:22.187679 6026 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/addons for local assets ...
I1101 15:54:22.187777 6026 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15232-2108/.minikube/files for local assets ...
I1101 15:54:22.187975 6026 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> 34132.pem in /etc/ssl/certs
I1101 15:54:22.187981 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> /etc/ssl/certs/34132.pem
I1101 15:54:22.188212 6026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1101 15:54:22.195033 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /etc/ssl/certs/34132.pem (1708 bytes)
I1101 15:54:22.211869 6026 start.go:303] post-start completed in 180.082611ms
I1101 15:54:22.212468 6026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-155406
I1101 15:54:22.269896 6026 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/config.json ...
I1101 15:54:22.270336 6026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1101 15:54:22.270429 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:22.328666 6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
I1101 15:54:22.413168 6026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1101 15:54:22.417736 6026 start.go:128] duration metric: createHost completed in 9.666221377s
I1101 15:54:22.417754 6026 start.go:83] releasing machines lock for "ingress-addon-legacy-155406", held for 9.666352131s
I1101 15:54:22.417857 6026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-155406
I1101 15:54:22.476642 6026 ssh_runner.go:195] Run: systemctl --version
I1101 15:54:22.476659 6026 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1101 15:54:22.476733 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:22.476739 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:22.541662 6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
I1101 15:54:22.541647 6026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50503 SSHKeyPath:/Users/jenkins/minikube-integration/15232-2108/.minikube/machines/ingress-addon-legacy-155406/id_rsa Username:docker}
I1101 15:54:22.887654 6026 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1101 15:54:22.898191 6026 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1101 15:54:22.898254 6026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1101 15:54:22.907139 6026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1101 15:54:22.919943 6026 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1101 15:54:22.995665 6026 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1101 15:54:23.062239 6026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1101 15:54:23.126468 6026 ssh_runner.go:195] Run: sudo systemctl restart docker
I1101 15:54:23.328830 6026 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 15:54:23.357001 6026 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1101 15:54:23.410650 6026 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
I1101 15:54:23.410835 6026 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-155406 dig +short host.docker.internal
I1101 15:54:23.528557 6026 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I1101 15:54:23.528677 6026 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I1101 15:54:23.533157 6026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 15:54:23.543191 6026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-155406
I1101 15:54:23.601432 6026 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1101 15:54:23.601514 6026 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 15:54:23.625463 6026 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I1101 15:54:23.625512 6026 docker.go:543] Images already preloaded, skipping extraction
I1101 15:54:23.625640 6026 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1101 15:54:23.649285 6026 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I1101 15:54:23.649307 6026 cache_images.go:84] Images are preloaded, skipping loading
I1101 15:54:23.649399 6026 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1101 15:54:23.714872 6026 cni.go:95] Creating CNI manager for ""
I1101 15:54:23.714887 6026 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1101 15:54:23.714900 6026 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1101 15:54:23.714926 6026 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-155406 NodeName:ingress-addon-legacy-155406 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1101 15:54:23.715041 6026 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-155406"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1101 15:54:23.715125 6026 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-155406 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-155406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1101 15:54:23.715199 6026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I1101 15:54:23.722737 6026 binaries.go:44] Found k8s binaries, skipping transfer
I1101 15:54:23.722813 6026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1101 15:54:23.729830 6026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I1101 15:54:23.742386 6026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I1101 15:54:23.754895 6026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
I1101 15:54:23.769689 6026 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1101 15:54:23.773386 6026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1101 15:54:23.782543 6026 certs.go:54] Setting up /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406 for IP: 192.168.49.2
I1101 15:54:23.782712 6026 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key
I1101 15:54:23.782846 6026 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key
I1101 15:54:23.782935 6026 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.key
I1101 15:54:23.782991 6026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.crt with IP's: []
I1101 15:54:23.900664 6026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.crt ...
I1101 15:54:23.900678 6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.crt: {Name:mkdb4aa1fb2a3c4956f9cfe604c0e6ab8b485639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 15:54:23.900989 6026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.key ...
I1101 15:54:23.900997 6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/client.key: {Name:mka4d335a3ca4cb9187a8ce6c14e2b88f7f8f4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 15:54:23.901231 6026 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key.dd3b5fb2
I1101 15:54:23.901253 6026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1101 15:54:24.003608 6026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt.dd3b5fb2 ...
I1101 15:54:24.003618 6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt.dd3b5fb2: {Name:mk14ccf3269d74b2d967c3f64898b38556e93b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 15:54:24.003864 6026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key.dd3b5fb2 ...
I1101 15:54:24.003872 6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key.dd3b5fb2: {Name:mka869dae2da2e9cb17bc526aec28ae2f2248554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 15:54:24.004069 6026 certs.go:320] copying /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt
I1101 15:54:24.004239 6026 certs.go:324] copying /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key
I1101 15:54:24.004403 6026 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key
I1101 15:54:24.004422 6026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt with IP's: []
I1101 15:54:24.147404 6026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt ...
I1101 15:54:24.147413 6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt: {Name:mk943ba14cc85b79d0c50b2da8c14438e6db01a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 15:54:24.147661 6026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key ...
I1101 15:54:24.147669 6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key: {Name:mk013fa177ec7ed4e53db51ab8122c7d9611f8b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1101 15:54:24.147865 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1101 15:54:24.147898 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1101 15:54:24.147921 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1101 15:54:24.147951 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1101 15:54:24.147974 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1101 15:54:24.147996 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1101 15:54:24.148015 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1101 15:54:24.148035 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1101 15:54:24.148133 6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem (1338 bytes)
W1101 15:54:24.148180 6026 certs.go:384] ignoring /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413_empty.pem, impossibly tiny 0 bytes
I1101 15:54:24.148191 6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca-key.pem (1679 bytes)
I1101 15:54:24.148232 6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/ca.pem (1082 bytes)
I1101 15:54:24.148267 6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/cert.pem (1123 bytes)
I1101 15:54:24.148299 6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/certs/key.pem (1675 bytes)
I1101 15:54:24.148385 6026 certs.go:388] found cert: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem (1708 bytes)
I1101 15:54:24.148429 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem -> /usr/share/ca-certificates/34132.pem
I1101 15:54:24.148457 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1101 15:54:24.148479 6026 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem -> /usr/share/ca-certificates/3413.pem
I1101 15:54:24.148977 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1101 15:54:24.167867 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1101 15:54:24.185388 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1101 15:54:24.202148 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/profiles/ingress-addon-legacy-155406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1101 15:54:24.218808 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1101 15:54:24.235809 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1101 15:54:24.252894 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1101 15:54:24.270077 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1101 15:54:24.286748 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/files/etc/ssl/certs/34132.pem --> /usr/share/ca-certificates/34132.pem (1708 bytes)
I1101 15:54:24.304011 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1101 15:54:24.321160 6026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15232-2108/.minikube/certs/3413.pem --> /usr/share/ca-certificates/3413.pem (1338 bytes)
I1101 15:54:24.338256 6026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1101 15:54:24.351530 6026 ssh_runner.go:195] Run: openssl version
I1101 15:54:24.356758 6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/34132.pem && ln -fs /usr/share/ca-certificates/34132.pem /etc/ssl/certs/34132.pem"
I1101 15:54:24.364532 6026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/34132.pem
I1101 15:54:24.368317 6026 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 1 22:49 /usr/share/ca-certificates/34132.pem
I1101 15:54:24.368363 6026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/34132.pem
I1101 15:54:24.373484 6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/34132.pem /etc/ssl/certs/3ec20f2e.0"
I1101 15:54:24.381285 6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1101 15:54:24.388955 6026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1101 15:54:24.392981 6026 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 1 22:45 /usr/share/ca-certificates/minikubeCA.pem
I1101 15:54:24.393036 6026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1101 15:54:24.398196 6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1101 15:54:24.405953 6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3413.pem && ln -fs /usr/share/ca-certificates/3413.pem /etc/ssl/certs/3413.pem"
I1101 15:54:24.413804 6026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3413.pem
I1101 15:54:24.417827 6026 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 1 22:49 /usr/share/ca-certificates/3413.pem
I1101 15:54:24.417877 6026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3413.pem
I1101 15:54:24.422844 6026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3413.pem /etc/ssl/certs/51391683.0"
I1101 15:54:24.430746 6026 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-155406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-155406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1101 15:54:24.430866 6026 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1101 15:54:24.452825 6026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1101 15:54:24.461177 6026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1101 15:54:24.468304 6026 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1101 15:54:24.468365 6026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 15:54:24.475716 6026 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 15:54:24.475741 6026 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1101 15:54:24.522040 6026 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I1101 15:54:24.522120 6026 kubeadm.go:317] [preflight] Running pre-flight checks
I1101 15:54:24.807115 6026 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1101 15:54:24.807185 6026 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1101 15:54:24.807280 6026 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1101 15:54:25.023212 6026 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1101 15:54:25.023878 6026 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1101 15:54:25.023912 6026 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1101 15:54:25.093104 6026 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1101 15:54:25.137486 6026 out.go:204] - Generating certificates and keys ...
I1101 15:54:25.137586 6026 kubeadm.go:317] [certs] Using existing ca certificate authority
I1101 15:54:25.137651 6026 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1101 15:54:25.240429 6026 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I1101 15:54:25.334957 6026 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I1101 15:54:25.460358 6026 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I1101 15:54:25.583179 6026 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I1101 15:54:25.876619 6026 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I1101 15:54:25.876814 6026 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1101 15:54:26.053675 6026 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I1101 15:54:26.053825 6026 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1101 15:54:26.278578 6026 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I1101 15:54:26.429244 6026 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I1101 15:54:26.645872 6026 kubeadm.go:317] [certs] Generating "sa" key and public key
I1101 15:54:26.646072 6026 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1101 15:54:26.765771 6026 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1101 15:54:26.885957 6026 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1101 15:54:27.079505 6026 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1101 15:54:27.198860 6026 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1101 15:54:27.199796 6026 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1101 15:54:27.221390 6026 out.go:204] - Booting up control plane ...
I1101 15:54:27.221610 6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1101 15:54:27.221769 6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1101 15:54:27.221926 6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1101 15:54:27.222089 6026 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1101 15:54:27.222365 6026 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1101 15:55:07.183037 6026 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I1101 15:55:07.183514 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:55:07.183745 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:55:12.180938 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:55:12.181144 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:55:22.175167 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:55:22.175372 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:55:42.161458 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:55:42.161693 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:56:22.133310 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:56:22.133633 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:56:22.133655 6026 kubeadm.go:317]
I1101 15:56:22.133714 6026 kubeadm.go:317] Unfortunately, an error has occurred:
I1101 15:56:22.133804 6026 kubeadm.go:317] timed out waiting for the condition
I1101 15:56:22.133820 6026 kubeadm.go:317]
I1101 15:56:22.133899 6026 kubeadm.go:317] This error is likely caused by:
I1101 15:56:22.133958 6026 kubeadm.go:317] - The kubelet is not running
I1101 15:56:22.134093 6026 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1101 15:56:22.134108 6026 kubeadm.go:317]
I1101 15:56:22.134230 6026 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1101 15:56:22.134298 6026 kubeadm.go:317] - 'systemctl status kubelet'
I1101 15:56:22.134345 6026 kubeadm.go:317] - 'journalctl -xeu kubelet'
I1101 15:56:22.134349 6026 kubeadm.go:317]
I1101 15:56:22.134449 6026 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1101 15:56:22.134512 6026 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1101 15:56:22.134517 6026 kubeadm.go:317]
I1101 15:56:22.134577 6026 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I1101 15:56:22.134620 6026 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I1101 15:56:22.134688 6026 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I1101 15:56:22.134716 6026 kubeadm.go:317] - 'docker logs CONTAINERID'
I1101 15:56:22.134726 6026 kubeadm.go:317]
I1101 15:56:22.137005 6026 kubeadm.go:317] W1101 22:54:24.522045 955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1101 15:56:22.137069 6026 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I1101 15:56:22.137173 6026 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
I1101 15:56:22.137259 6026 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 15:56:22.137377 6026 kubeadm.go:317] W1101 22:54:27.208088 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1101 15:56:22.137481 6026 kubeadm.go:317] W1101 22:54:27.209373 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1101 15:56:22.137553 6026 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1101 15:56:22.137613 6026 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W1101 15:56:22.137793 6026 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1101 22:54:24.522045 955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1101 22:54:27.208088 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1101 22:54:27.209373 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-155406 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1101 22:54:24.522045 955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1101 22:54:27.208088 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1101 22:54:27.209373 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I1101 15:56:22.137824 6026 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I1101 15:56:22.554601 6026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1101 15:56:22.564217 6026 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1101 15:56:22.564294 6026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1101 15:56:22.571953 6026 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1101 15:56:22.571976 6026 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1101 15:56:22.618426 6026 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I1101 15:56:22.618481 6026 kubeadm.go:317] [preflight] Running pre-flight checks
I1101 15:56:22.904594 6026 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1101 15:56:22.904700 6026 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1101 15:56:22.904783 6026 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1101 15:56:23.117989 6026 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1101 15:56:23.118668 6026 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1101 15:56:23.118716 6026 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1101 15:56:23.187566 6026 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1101 15:56:23.209012 6026 out.go:204] - Generating certificates and keys ...
I1101 15:56:23.209131 6026 kubeadm.go:317] [certs] Using existing ca certificate authority
I1101 15:56:23.209189 6026 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1101 15:56:23.209260 6026 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1101 15:56:23.209373 6026 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
I1101 15:56:23.209497 6026 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
I1101 15:56:23.209635 6026 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
I1101 15:56:23.209691 6026 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
I1101 15:56:23.209751 6026 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
I1101 15:56:23.209822 6026 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1101 15:56:23.209882 6026 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1101 15:56:23.209916 6026 kubeadm.go:317] [certs] Using the existing "sa" key
I1101 15:56:23.209960 6026 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1101 15:56:23.275773 6026 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1101 15:56:23.417891 6026 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1101 15:56:23.473870 6026 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1101 15:56:23.642600 6026 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1101 15:56:23.643043 6026 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1101 15:56:23.664663 6026 out.go:204] - Booting up control plane ...
I1101 15:56:23.664793 6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1101 15:56:23.664933 6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1101 15:56:23.665042 6026 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1101 15:56:23.665189 6026 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1101 15:56:23.665468 6026 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1101 15:57:03.625112 6026 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I1101 15:57:03.626171 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:57:03.626353 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:57:08.624187 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:57:08.624418 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:57:18.616693 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:57:18.616863 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:57:38.603348 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:57:38.603495 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:58:18.573997 6026 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1101 15:58:18.574164 6026 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1101 15:58:18.574176 6026 kubeadm.go:317]
I1101 15:58:18.574208 6026 kubeadm.go:317] Unfortunately, an error has occurred:
I1101 15:58:18.574240 6026 kubeadm.go:317] timed out waiting for the condition
I1101 15:58:18.574244 6026 kubeadm.go:317]
I1101 15:58:18.574273 6026 kubeadm.go:317] This error is likely caused by:
I1101 15:58:18.574299 6026 kubeadm.go:317] - The kubelet is not running
I1101 15:58:18.574412 6026 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1101 15:58:18.574423 6026 kubeadm.go:317]
I1101 15:58:18.574502 6026 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1101 15:58:18.574532 6026 kubeadm.go:317] - 'systemctl status kubelet'
I1101 15:58:18.574562 6026 kubeadm.go:317] - 'journalctl -xeu kubelet'
I1101 15:58:18.574575 6026 kubeadm.go:317]
I1101 15:58:18.574667 6026 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1101 15:58:18.574737 6026 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1101 15:58:18.574749 6026 kubeadm.go:317]
I1101 15:58:18.574835 6026 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I1101 15:58:18.574878 6026 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I1101 15:58:18.574943 6026 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I1101 15:58:18.574973 6026 kubeadm.go:317] - 'docker logs CONTAINERID'
I1101 15:58:18.574986 6026 kubeadm.go:317]
I1101 15:58:18.577301 6026 kubeadm.go:317] W1101 22:56:22.639514 3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1101 15:58:18.577373 6026 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I1101 15:58:18.577486 6026 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
I1101 15:58:18.577597 6026 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1101 15:58:18.577727 6026 kubeadm.go:317] W1101 22:56:23.649774 3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1101 15:58:18.577835 6026 kubeadm.go:317] W1101 22:56:23.650559 3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1101 15:58:18.577903 6026 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1101 15:58:18.577956 6026 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1101 15:58:18.577986 6026 kubeadm.go:398] StartCluster complete in 3m54.152918724s
I1101 15:58:18.578086 6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1101 15:58:18.601189 6026 logs.go:274] 0 containers: []
W1101 15:58:18.601201 6026 logs.go:276] No container was found matching "kube-apiserver"
I1101 15:58:18.601283 6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1101 15:58:18.624636 6026 logs.go:274] 0 containers: []
W1101 15:58:18.624648 6026 logs.go:276] No container was found matching "etcd"
I1101 15:58:18.624736 6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1101 15:58:18.647359 6026 logs.go:274] 0 containers: []
W1101 15:58:18.647371 6026 logs.go:276] No container was found matching "coredns"
I1101 15:58:18.647451 6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1101 15:58:18.669427 6026 logs.go:274] 0 containers: []
W1101 15:58:18.669438 6026 logs.go:276] No container was found matching "kube-scheduler"
I1101 15:58:18.669523 6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1101 15:58:18.691652 6026 logs.go:274] 0 containers: []
W1101 15:58:18.691663 6026 logs.go:276] No container was found matching "kube-proxy"
I1101 15:58:18.691746 6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1101 15:58:18.713432 6026 logs.go:274] 0 containers: []
W1101 15:58:18.713444 6026 logs.go:276] No container was found matching "kubernetes-dashboard"
I1101 15:58:18.713527 6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1101 15:58:18.735477 6026 logs.go:274] 0 containers: []
W1101 15:58:18.735491 6026 logs.go:276] No container was found matching "storage-provisioner"
I1101 15:58:18.735576 6026 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1101 15:58:18.757691 6026 logs.go:274] 0 containers: []
W1101 15:58:18.757703 6026 logs.go:276] No container was found matching "kube-controller-manager"
I1101 15:58:18.757710 6026 logs.go:123] Gathering logs for container status ...
I1101 15:58:18.757717 6026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1101 15:58:20.804664 6026 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046983442s)
I1101 15:58:20.804845 6026 logs.go:123] Gathering logs for kubelet ...
I1101 15:58:20.804855 6026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1101 15:58:20.846199 6026 logs.go:123] Gathering logs for dmesg ...
I1101 15:58:20.846221 6026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1101 15:58:20.860840 6026 logs.go:123] Gathering logs for describe nodes ...
I1101 15:58:20.860853 6026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1101 15:58:20.915013 6026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1101 15:58:20.915028 6026 logs.go:123] Gathering logs for Docker ...
I1101 15:58:20.915035 6026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
W1101 15:58:20.930302 6026 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1101 22:56:22.639514 3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1101 22:56:23.649774 3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1101 22:56:23.650559 3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
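The commands kubeadm recommends above can be run directly inside the minikube node; a minimal troubleshooting sketch for this run is below. The ssh step assumes the docker-driver node for profile ingress-addon-legacy-155406 is still up, and the final swapoff command is our assumed fix for the swap preflight warning (the log only says to disable swap, not how).
out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-155406   # shell into the node (docker driver)
sudo systemctl status kubelet                                  # is the kubelet service running?
sudo journalctl -xeu kubelet                                   # recent kubelet logs and errors
sudo docker ps -a | grep kube | grep -v pause                  # list control-plane containers
sudo docker logs CONTAINERID                                   # inspect a failing container's logs
sudo systemctl enable kubelet.service                          # preflight warning: kubelet service not enabled
sudo swapoff -a                                                # preflight warning: swap is on (assumed command)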
W1101 15:58:20.930323 6026 out.go:239] *
W1101 15:58:20.930468 6026 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1101 22:56:22.639514 3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1101 22:56:23.649774 3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1101 22:56:23.650559 3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1101 15:58:20.930484 6026 out.go:239] *
W1101 15:58:20.931166 6026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1101 15:58:20.995955 6026 out.go:177]
W1101 15:58:21.062246 6026 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1101 22:56:22.639514 3459 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1101 22:56:23.649774 3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1101 22:56:23.650559 3459 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1101 15:58:21.062401 6026 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1101 15:58:21.062535 6026 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1101 15:58:21.104926 6026 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.34s)
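If the kubelet failure is the cgroup-driver mismatch the suggestion in the log points at, a retry of the failed start command with the extra kubelet config would look roughly like the sketch below. The flags are the same ones the test used; whether systemd is actually the right cgroup driver for this Docker 20.10.20 node is an assumption to confirm (e.g. via docker info) before relying on it, and deleting the profile first is optional.
out/minikube-darwin-amd64 delete -p ingress-addon-legacy-155406   # optional: start from a clean profile
out/minikube-darwin-amd64 start -p ingress-addon-legacy-155406 \
  --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
  --alsologtostderr -v=5 --driver=docker \
  --extra-config=kubelet.cgroup-driver=systemd                    # flag suggested by the failure output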