=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-054000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0127 19:41:09.155859 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:43:25.303768 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:43:44.655060 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.660542 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.672694 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.694890 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.737011 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.817794 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.979003 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:45.301221 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:45.941458 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:47.222138 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:49.782363 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:52.996726 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:43:54.902471 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:44:05.143485 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:44:25.625087 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:45:06.585189 4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-054000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.834888872s)
-- stdout --
* [ingress-addon-legacy-054000] minikube v1.28.0 on Darwin 13.2
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-054000 in cluster ingress-addon-legacy-054000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 20.10.22 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0127 19:40:53.682015 7492 out.go:296] Setting OutFile to fd 1 ...
I0127 19:40:53.682163 7492 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0127 19:40:53.682168 7492 out.go:309] Setting ErrFile to fd 2...
I0127 19:40:53.682172 7492 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0127 19:40:53.682288 7492 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
I0127 19:40:53.682861 7492 out.go:303] Setting JSON to false
I0127 19:40:53.701307 7492 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2427,"bootTime":1674874826,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0127 19:40:53.701384 7492 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0127 19:40:53.723540 7492 out.go:177] * [ingress-addon-legacy-054000] minikube v1.28.0 on Darwin 13.2
I0127 19:40:53.765870 7492 notify.go:220] Checking for updates...
I0127 19:40:53.787197 7492 out.go:177] - MINIKUBE_LOCATION=15565
I0127 19:40:53.829853 7492 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
I0127 19:40:53.851315 7492 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0127 19:40:53.873331 7492 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 19:40:53.895135 7492 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
I0127 19:40:53.917183 7492 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 19:40:53.939568 7492 driver.go:365] Setting default libvirt URI to qemu:///system
I0127 19:40:53.999883 7492 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0127 19:40:54.000099 7492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 19:40:54.146815 7492 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-28 03:40:54.051291725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0127 19:40:54.168156 7492 out.go:177] * Using the docker driver based on user configuration
I0127 19:40:54.189195 7492 start.go:296] selected driver: docker
I0127 19:40:54.189217 7492 start.go:840] validating driver "docker" against <nil>
I0127 19:40:54.189255 7492 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 19:40:54.193185 7492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 19:40:54.335924 7492 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-28 03:40:54.244118236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0127 19:40:54.336048 7492 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0127 19:40:54.336197 7492 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 19:40:54.358120 7492 out.go:177] * Using Docker Desktop driver with root privileges
I0127 19:40:54.379724 7492 cni.go:84] Creating CNI manager for ""
I0127 19:40:54.379759 7492 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0127 19:40:54.379778 7492 start_flags.go:319] config:
{Name:ingress-addon-legacy-054000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-054000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0127 19:40:54.423850 7492 out.go:177] * Starting control plane node ingress-addon-legacy-054000 in cluster ingress-addon-legacy-054000
I0127 19:40:54.445442 7492 cache.go:120] Beginning downloading kic base image for docker with docker
I0127 19:40:54.466863 7492 out.go:177] * Pulling base image ...
I0127 19:40:54.509709 7492 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0127 19:40:54.509713 7492 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
I0127 19:40:54.560844 7492 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0127 19:40:54.560870 7492 cache.go:57] Caching tarball of preloaded images
I0127 19:40:54.561061 7492 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0127 19:40:54.582631 7492 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0127 19:40:54.625585 7492 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0127 19:40:54.614212 7492 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
I0127 19:40:54.625675 7492 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
I0127 19:40:54.711666 7492 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0127 19:40:57.219202 7492 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0127 19:40:57.219382 7492 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0127 19:40:57.841651 7492 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0127 19:40:57.841929 7492 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/config.json ...
I0127 19:40:57.841954 7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/config.json: {Name:mk7e9386f9c8348577381a5d689e80c6463f62a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 19:40:57.842262 7492 cache.go:193] Successfully downloaded all kic artifacts
I0127 19:40:57.842288 7492 start.go:364] acquiring machines lock for ingress-addon-legacy-054000: {Name:mk028f3a902092b125e4b1d22762f6d6b2eef6d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 19:40:57.842416 7492 start.go:368] acquired machines lock for "ingress-addon-legacy-054000" in 120.893µs
I0127 19:40:57.842438 7492 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-054000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-054000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0127 19:40:57.842519 7492 start.go:125] createHost starting for "" (driver="docker")
I0127 19:40:57.868898 7492 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0127 19:40:57.869150 7492 start.go:159] libmachine.API.Create for "ingress-addon-legacy-054000" (driver="docker")
I0127 19:40:57.869200 7492 client.go:168] LocalClient.Create starting
I0127 19:40:57.869314 7492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem
I0127 19:40:57.869358 7492 main.go:141] libmachine: Decoding PEM data...
I0127 19:40:57.869376 7492 main.go:141] libmachine: Parsing certificate...
I0127 19:40:57.869445 7492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem
I0127 19:40:57.869495 7492 main.go:141] libmachine: Decoding PEM data...
I0127 19:40:57.869503 7492 main.go:141] libmachine: Parsing certificate...
I0127 19:40:57.890362 7492 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-054000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 19:40:57.950566 7492 cli_runner.go:211] docker network inspect ingress-addon-legacy-054000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 19:40:57.950688 7492 network_create.go:281] running [docker network inspect ingress-addon-legacy-054000] to gather additional debugging logs...
I0127 19:40:57.950708 7492 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-054000
W0127 19:40:58.005807 7492 cli_runner.go:211] docker network inspect ingress-addon-legacy-054000 returned with exit code 1
I0127 19:40:58.005838 7492 network_create.go:284] error running [docker network inspect ingress-addon-legacy-054000]: docker network inspect ingress-addon-legacy-054000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-054000
I0127 19:40:58.005857 7492 network_create.go:286] output of [docker network inspect ingress-addon-legacy-054000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-054000
** /stderr **
I0127 19:40:58.005957 7492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 19:40:58.063088 7492 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005aafe0}
I0127 19:40:58.063123 7492 network_create.go:123] attempt to create docker network ingress-addon-legacy-054000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0127 19:40:58.063198 7492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-054000 ingress-addon-legacy-054000
I0127 19:40:58.149765 7492 network_create.go:107] docker network ingress-addon-legacy-054000 192.168.49.0/24 created
I0127 19:40:58.149803 7492 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-054000" container
I0127 19:40:58.149917 7492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0127 19:40:58.204354 7492 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-054000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-054000 --label created_by.minikube.sigs.k8s.io=true
I0127 19:40:58.259501 7492 oci.go:103] Successfully created a docker volume ingress-addon-legacy-054000
I0127 19:40:58.259626 7492 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-054000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-054000 --entrypoint /usr/bin/test -v ingress-addon-legacy-054000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
I0127 19:40:58.746342 7492 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-054000
I0127 19:40:58.746385 7492 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0127 19:40:58.746402 7492 kic.go:190] Starting extracting preloaded images to volume ...
I0127 19:40:58.746518 7492 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-054000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
I0127 19:41:04.838894 7492 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-054000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (6.092341935s)
I0127 19:41:04.838925 7492 kic.go:199] duration metric: took 6.092576 seconds to extract preloaded images to volume
I0127 19:41:04.839063 7492 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0127 19:41:04.988435 7492 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-054000 --name ingress-addon-legacy-054000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-054000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-054000 --network ingress-addon-legacy-054000 --ip 192.168.49.2 --volume ingress-addon-legacy-054000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
I0127 19:41:05.347876 7492 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Running}}
I0127 19:41:05.409529 7492 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Status}}
I0127 19:41:05.471687 7492 cli_runner.go:164] Run: docker exec ingress-addon-legacy-054000 stat /var/lib/dpkg/alternatives/iptables
I0127 19:41:05.582308 7492 oci.go:144] the created container "ingress-addon-legacy-054000" has a running status.
I0127 19:41:05.582348 7492 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa...
I0127 19:41:05.731509 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0127 19:41:05.731622 7492 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0127 19:41:05.834432 7492 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Status}}
I0127 19:41:05.895065 7492 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0127 19:41:05.895084 7492 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-054000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0127 19:41:06.002676 7492 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Status}}
I0127 19:41:06.059952 7492 machine.go:88] provisioning docker machine ...
I0127 19:41:06.059994 7492 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-054000"
I0127 19:41:06.060119 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:06.118646 7492 main.go:141] libmachine: Using SSH client type: native
I0127 19:41:06.118858 7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50680 <nil> <nil>}
I0127 19:41:06.118875 7492 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-054000 && echo "ingress-addon-legacy-054000" | sudo tee /etc/hostname
I0127 19:41:06.263158 7492 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-054000
I0127 19:41:06.263254 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:06.323355 7492 main.go:141] libmachine: Using SSH client type: native
I0127 19:41:06.323524 7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50680 <nil> <nil>}
I0127 19:41:06.323543 7492 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-054000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-054000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-054000' | sudo tee -a /etc/hosts;
fi
fi
I0127 19:41:06.457662 7492 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 19:41:06.457683 7492 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3092/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3092/.minikube}
I0127 19:41:06.457699 7492 ubuntu.go:177] setting up certificates
I0127 19:41:06.457707 7492 provision.go:83] configureAuth start
I0127 19:41:06.457785 7492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-054000
I0127 19:41:06.515960 7492 provision.go:138] copyHostCerts
I0127 19:41:06.516009 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem
I0127 19:41:06.516089 7492 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem, removing ...
I0127 19:41:06.516094 7492 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem
I0127 19:41:06.516215 7492 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem (1679 bytes)
I0127 19:41:06.516390 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem
I0127 19:41:06.516421 7492 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem, removing ...
I0127 19:41:06.516426 7492 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem
I0127 19:41:06.516489 7492 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem (1078 bytes)
I0127 19:41:06.516631 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem
I0127 19:41:06.516669 7492 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem, removing ...
I0127 19:41:06.516674 7492 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem
I0127 19:41:06.516746 7492 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem (1123 bytes)
I0127 19:41:06.516884 7492 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-054000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-054000]
I0127 19:41:06.558574 7492 provision.go:172] copyRemoteCerts
I0127 19:41:06.558631 7492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 19:41:06.558684 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:06.617025 7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
I0127 19:41:06.714169 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0127 19:41:06.714257 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0127 19:41:06.731902 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem -> /etc/docker/server.pem
I0127 19:41:06.731993 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0127 19:41:06.749929 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0127 19:41:06.750006 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 19:41:06.767491 7492 provision.go:86] duration metric: configureAuth took 309.775211ms
I0127 19:41:06.767505 7492 ubuntu.go:193] setting minikube options for container-runtime
I0127 19:41:06.767655 7492 config.go:180] Loaded profile config "ingress-addon-legacy-054000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0127 19:41:06.767718 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:06.827430 7492 main.go:141] libmachine: Using SSH client type: native
I0127 19:41:06.827595 7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50680 <nil> <nil>}
I0127 19:41:06.827612 7492 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0127 19:41:06.962752 7492 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0127 19:41:06.962766 7492 ubuntu.go:71] root file system type: overlay
I0127 19:41:06.962951 7492 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0127 19:41:06.963036 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:07.021819 7492 main.go:141] libmachine: Using SSH client type: native
I0127 19:41:07.021984 7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50680 <nil> <nil>}
I0127 19:41:07.022038 7492 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0127 19:41:07.166728 7492 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0127 19:41:07.166834 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:07.224853 7492 main.go:141] libmachine: Using SSH client type: native
I0127 19:41:07.225021 7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50680 <nil> <nil>}
I0127 19:41:07.225034 7492 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0127 19:41:07.844740 7492 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-12-15 22:25:58.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-28 03:41:07.163682701 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0127 19:41:07.844764 7492 machine.go:91] provisioned docker machine in 1.784805821s
I0127 19:41:07.844770 7492 client.go:171] LocalClient.Create took 9.9756485s
I0127 19:41:07.844785 7492 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-054000" took 9.975719843s
I0127 19:41:07.844795 7492 start.go:300] post-start starting for "ingress-addon-legacy-054000" (driver="docker")
I0127 19:41:07.844800 7492 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 19:41:07.844937 7492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 19:41:07.845046 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:07.904712 7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
I0127 19:41:08.000545 7492 ssh_runner.go:195] Run: cat /etc/os-release
I0127 19:41:08.004273 7492 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0127 19:41:08.004298 7492 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0127 19:41:08.004309 7492 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0127 19:41:08.004322 7492 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0127 19:41:08.004331 7492 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/addons for local assets ...
I0127 19:41:08.004461 7492 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/files for local assets ...
I0127 19:41:08.004664 7492 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> 44062.pem in /etc/ssl/certs
I0127 19:41:08.004672 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> /etc/ssl/certs/44062.pem
I0127 19:41:08.004868 7492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 19:41:08.012169 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /etc/ssl/certs/44062.pem (1708 bytes)
I0127 19:41:08.029868 7492 start.go:303] post-start completed in 185.066357ms
I0127 19:41:08.030502 7492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-054000
I0127 19:41:08.089479 7492 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/config.json ...
I0127 19:41:08.089926 7492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0127 19:41:08.090001 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:08.149122 7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
I0127 19:41:08.240025 7492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0127 19:41:08.244765 7492 start.go:128] duration metric: createHost completed in 10.402322988s
I0127 19:41:08.244792 7492 start.go:83] releasing machines lock for "ingress-addon-legacy-054000", held for 10.402452729s
I0127 19:41:08.244915 7492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-054000
I0127 19:41:08.302874 7492 ssh_runner.go:195] Run: cat /version.json
I0127 19:41:08.302888 7492 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0127 19:41:08.302941 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:08.302948 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:08.364995 7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
I0127 19:41:08.365132 7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
I0127 19:41:08.663419 7492 ssh_runner.go:195] Run: systemctl --version
I0127 19:41:08.667991 7492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0127 19:41:08.672923 7492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0127 19:41:08.693745 7492 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0127 19:41:08.712132 7492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0127 19:41:08.729154 7492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0127 19:41:08.736955 7492 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0127 19:41:08.736969 7492 start.go:472] detecting cgroup driver to use...
I0127 19:41:08.736982 7492 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0127 19:41:08.737080 7492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 19:41:08.750212 7492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0127 19:41:08.758795 7492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 19:41:08.767806 7492 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 19:41:08.767935 7492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 19:41:08.776518 7492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 19:41:08.784958 7492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 19:41:08.793359 7492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 19:41:08.802063 7492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 19:41:08.810120 7492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 19:41:08.819024 7492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 19:41:08.826535 7492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 19:41:08.834053 7492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 19:41:08.901405 7492 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 19:41:08.977173 7492 start.go:472] detecting cgroup driver to use...
I0127 19:41:08.977192 7492 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0127 19:41:08.977293 7492 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0127 19:41:08.988865 7492 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0127 19:41:08.988937 7492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 19:41:08.999690 7492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0127 19:41:09.013764 7492 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0127 19:41:09.105328 7492 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0127 19:41:09.207729 7492 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0127 19:41:09.207746 7492 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0127 19:41:09.221311 7492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 19:41:09.316742 7492 ssh_runner.go:195] Run: sudo systemctl restart docker
I0127 19:41:09.528142 7492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0127 19:41:09.558937 7492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0127 19:41:09.635563 7492 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.22 ...
I0127 19:41:09.635791 7492 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-054000 dig +short host.docker.internal
I0127 19:41:09.750290 7492 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0127 19:41:09.750406 7492 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0127 19:41:09.754775 7492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 19:41:09.764980 7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
I0127 19:41:09.826262 7492 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0127 19:41:09.826339 7492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 19:41:09.851861 7492 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0127 19:41:09.851879 7492 docker.go:560] Images already preloaded, skipping extraction
I0127 19:41:09.851951 7492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 19:41:09.876876 7492 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0127 19:41:09.876892 7492 cache_images.go:84] Images are preloaded, skipping loading
I0127 19:41:09.876985 7492 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0127 19:41:09.949323 7492 cni.go:84] Creating CNI manager for ""
I0127 19:41:09.949344 7492 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0127 19:41:09.949368 7492 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0127 19:41:09.949386 7492 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-054000 NodeName:ingress-addon-legacy-054000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0127 19:41:09.949517 7492 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-054000"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0127 19:41:09.949615 7492 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-054000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-054000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0127 19:41:09.949678 7492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0127 19:41:09.957765 7492 binaries.go:44] Found k8s binaries, skipping transfer
I0127 19:41:09.957826 7492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 19:41:09.965385 7492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0127 19:41:09.978477 7492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0127 19:41:09.991478 7492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0127 19:41:10.004953 7492 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0127 19:41:10.008913 7492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 19:41:10.018778 7492 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000 for IP: 192.168.49.2
I0127 19:41:10.018795 7492 certs.go:186] acquiring lock for shared ca certs: {Name:mk2d86ad31f10478b3fe72eedd54ef2fcd74cf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 19:41:10.018972 7492 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key
I0127 19:41:10.019048 7492 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key
I0127 19:41:10.019090 7492 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.key
I0127 19:41:10.019104 7492 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.crt with IP's: []
I0127 19:41:10.092741 7492 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.crt ...
I0127 19:41:10.092754 7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.crt: {Name:mk5948de52246f31ea9dca617aa13d451663230d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 19:41:10.093064 7492 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.key ...
I0127 19:41:10.093072 7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.key: {Name:mke380590e0a75d844fa50d6e66145fd00a430fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 19:41:10.093277 7492 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key.dd3b5fb2
I0127 19:41:10.093292 7492 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0127 19:41:10.368101 7492 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt.dd3b5fb2 ...
I0127 19:41:10.368116 7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt.dd3b5fb2: {Name:mk6d23a3062e04b7b3cd2f8f1bee1444b4d77482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 19:41:10.368418 7492 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key.dd3b5fb2 ...
I0127 19:41:10.368427 7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key.dd3b5fb2: {Name:mk64d8aacf9651324704a8002ebfd1ac8712a26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 19:41:10.368626 7492 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt
I0127 19:41:10.368804 7492 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key
I0127 19:41:10.368982 7492 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key
I0127 19:41:10.369002 7492 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt with IP's: []
I0127 19:41:10.716056 7492 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt ...
I0127 19:41:10.716070 7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt: {Name:mkc6726d4fc8887e9eb49f726a06b0037ac71b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 19:41:10.716345 7492 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key ...
I0127 19:41:10.716352 7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key: {Name:mk2e0342ced452cfe62fa48c2ac5e81968858620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 19:41:10.716530 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0127 19:41:10.716560 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0127 19:41:10.716581 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0127 19:41:10.716604 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0127 19:41:10.716623 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0127 19:41:10.716645 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0127 19:41:10.716668 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0127 19:41:10.716688 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0127 19:41:10.716776 7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem (1338 bytes)
W0127 19:41:10.716834 7492 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406_empty.pem, impossibly tiny 0 bytes
I0127 19:41:10.716846 7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem (1679 bytes)
I0127 19:41:10.716878 7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem (1078 bytes)
I0127 19:41:10.716919 7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem (1123 bytes)
I0127 19:41:10.716954 7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem (1679 bytes)
I0127 19:41:10.717031 7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem (1708 bytes)
I0127 19:41:10.717060 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0127 19:41:10.717117 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem -> /usr/share/ca-certificates/4406.pem
I0127 19:41:10.717143 7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> /usr/share/ca-certificates/44062.pem
I0127 19:41:10.717675 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0127 19:41:10.737043 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 19:41:10.754701 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 19:41:10.772213 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 19:41:10.789744 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 19:41:10.807103 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 19:41:10.824981 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 19:41:10.842568 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0127 19:41:10.860393 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 19:41:10.878358 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem --> /usr/share/ca-certificates/4406.pem (1338 bytes)
I0127 19:41:10.896157 7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /usr/share/ca-certificates/44062.pem (1708 bytes)
I0127 19:41:10.913683 7492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 19:41:10.926724 7492 ssh_runner.go:195] Run: openssl version
I0127 19:41:10.932400 7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 19:41:10.941167 7492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 19:41:10.945530 7492 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:31 /usr/share/ca-certificates/minikubeCA.pem
I0127 19:41:10.945574 7492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 19:41:10.951028 7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 19:41:10.959467 7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4406.pem && ln -fs /usr/share/ca-certificates/4406.pem /etc/ssl/certs/4406.pem"
I0127 19:41:10.967890 7492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4406.pem
I0127 19:41:10.971906 7492 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:36 /usr/share/ca-certificates/4406.pem
I0127 19:41:10.971956 7492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4406.pem
I0127 19:41:10.977362 7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4406.pem /etc/ssl/certs/51391683.0"
I0127 19:41:10.985577 7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44062.pem && ln -fs /usr/share/ca-certificates/44062.pem /etc/ssl/certs/44062.pem"
I0127 19:41:10.993954 7492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44062.pem
I0127 19:41:10.998188 7492 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:36 /usr/share/ca-certificates/44062.pem
I0127 19:41:10.998238 7492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44062.pem
I0127 19:41:11.003874 7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44062.pem /etc/ssl/certs/3ec20f2e.0"
I0127 19:41:11.012168 7492 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-054000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-054000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0127 19:41:11.012326 7492 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0127 19:41:11.035833 7492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 19:41:11.044067 7492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 19:41:11.051807 7492 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0127 19:41:11.051881 7492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 19:41:11.059359 7492 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 19:41:11.059384 7492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0127 19:41:11.108390 7492 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0127 19:41:11.108431 7492 kubeadm.go:322] [preflight] Running pre-flight checks
I0127 19:41:11.412018 7492 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 19:41:11.412147 7492 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 19:41:11.412281 7492 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0127 19:41:11.638183 7492 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 19:41:11.638725 7492 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 19:41:11.638784 7492 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0127 19:41:11.712792 7492 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 19:41:11.756024 7492 out.go:204] - Generating certificates and keys ...
I0127 19:41:11.756169 7492 kubeadm.go:322] [certs] Using existing ca certificate authority
I0127 19:41:11.756268 7492 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0127 19:41:12.068637 7492 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0127 19:41:12.446081 7492 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0127 19:41:12.752873 7492 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0127 19:41:12.821444 7492 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0127 19:41:12.936662 7492 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0127 19:41:12.936798 7492 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0127 19:41:13.037964 7492 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0127 19:41:13.038152 7492 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0127 19:41:13.206406 7492 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0127 19:41:13.406723 7492 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0127 19:41:13.689580 7492 kubeadm.go:322] [certs] Generating "sa" key and public key
I0127 19:41:13.705664 7492 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 19:41:13.801781 7492 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 19:41:13.914710 7492 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 19:41:14.048146 7492 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 19:41:14.118969 7492 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 19:41:14.119519 7492 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 19:41:14.141118 7492 out.go:204] - Booting up control plane ...
I0127 19:41:14.141269 7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 19:41:14.141368 7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 19:41:14.141474 7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 19:41:14.141564 7492 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 19:41:14.141748 7492 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0127 19:41:54.128042 7492 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0127 19:41:54.129019 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:41:54.129198 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:41:59.129939 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:41:59.130152 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:42:09.131888 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:42:09.132052 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:42:29.133507 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:42:29.133762 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:43:09.134531 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:43:09.134731 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:43:09.134743 7492 kubeadm.go:322]
I0127 19:43:09.134780 7492 kubeadm.go:322] Unfortunately, an error has occurred:
I0127 19:43:09.134858 7492 kubeadm.go:322] timed out waiting for the condition
I0127 19:43:09.134869 7492 kubeadm.go:322]
I0127 19:43:09.134904 7492 kubeadm.go:322] This error is likely caused by:
I0127 19:43:09.134933 7492 kubeadm.go:322] - The kubelet is not running
I0127 19:43:09.135070 7492 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0127 19:43:09.135085 7492 kubeadm.go:322]
I0127 19:43:09.135181 7492 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0127 19:43:09.135241 7492 kubeadm.go:322] - 'systemctl status kubelet'
I0127 19:43:09.135277 7492 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0127 19:43:09.135284 7492 kubeadm.go:322]
I0127 19:43:09.135372 7492 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0127 19:43:09.135465 7492 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0127 19:43:09.135478 7492 kubeadm.go:322]
I0127 19:43:09.135561 7492 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0127 19:43:09.135639 7492 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0127 19:43:09.135732 7492 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0127 19:43:09.135819 7492 kubeadm.go:322] - 'docker logs CONTAINERID'
I0127 19:43:09.135832 7492 kubeadm.go:322]
I0127 19:43:09.138817 7492 kubeadm.go:322] W0128 03:41:11.107740 1169 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0127 19:43:09.138968 7492 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0127 19:43:09.139043 7492 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0127 19:43:09.139161 7492 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
I0127 19:43:09.139267 7492 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 19:43:09.139370 7492 kubeadm.go:322] W0128 03:41:14.123584 1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0127 19:43:09.139471 7492 kubeadm.go:322] W0128 03:41:14.125037 1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0127 19:43:09.139542 7492 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0127 19:43:09.139601 7492 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0127 19:43:09.139811 7492 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0128 03:41:11.107740 1169 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0128 03:41:14.123584 1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0128 03:41:14.125037 1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0128 03:41:11.107740 1169 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0128 03:41:14.123584 1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0128 03:41:14.125037 1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0127 19:43:09.139866 7492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0127 19:43:09.554550 7492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 19:43:09.564622 7492 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0127 19:43:09.564680 7492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 19:43:09.572049 7492 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 19:43:09.572080 7492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0127 19:43:09.620065 7492 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0127 19:43:09.620120 7492 kubeadm.go:322] [preflight] Running pre-flight checks
I0127 19:43:09.910530 7492 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 19:43:09.910627 7492 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 19:43:09.910721 7492 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0127 19:43:10.132063 7492 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 19:43:10.132992 7492 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 19:43:10.133041 7492 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0127 19:43:10.201986 7492 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 19:43:10.223677 7492 out.go:204] - Generating certificates and keys ...
I0127 19:43:10.223761 7492 kubeadm.go:322] [certs] Using existing ca certificate authority
I0127 19:43:10.223830 7492 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0127 19:43:10.223939 7492 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 19:43:10.224032 7492 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0127 19:43:10.224087 7492 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0127 19:43:10.224140 7492 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0127 19:43:10.224254 7492 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0127 19:43:10.224316 7492 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0127 19:43:10.224371 7492 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 19:43:10.224435 7492 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 19:43:10.224485 7492 kubeadm.go:322] [certs] Using the existing "sa" key
I0127 19:43:10.224538 7492 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 19:43:10.411376 7492 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 19:43:10.576138 7492 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 19:43:10.719313 7492 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 19:43:10.897559 7492 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 19:43:10.898227 7492 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 19:43:10.919749 7492 out.go:204] - Booting up control plane ...
I0127 19:43:10.919971 7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 19:43:10.920172 7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 19:43:10.920296 7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 19:43:10.920458 7492 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 19:43:10.920747 7492 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0127 19:43:50.907841 7492 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0127 19:43:50.908421 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:43:50.908667 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:43:55.909384 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:43:55.909602 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:44:05.910280 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:44:05.910438 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:44:25.911844 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:44:25.912056 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:45:05.913015 7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0127 19:45:05.913240 7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0127 19:45:05.913253 7492 kubeadm.go:322]
I0127 19:45:05.913291 7492 kubeadm.go:322] Unfortunately, an error has occurred:
I0127 19:45:05.913334 7492 kubeadm.go:322] timed out waiting for the condition
I0127 19:45:05.913343 7492 kubeadm.go:322]
I0127 19:45:05.913412 7492 kubeadm.go:322] This error is likely caused by:
I0127 19:45:05.913457 7492 kubeadm.go:322] - The kubelet is not running
I0127 19:45:05.913565 7492 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0127 19:45:05.913585 7492 kubeadm.go:322]
I0127 19:45:05.913725 7492 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0127 19:45:05.913770 7492 kubeadm.go:322] - 'systemctl status kubelet'
I0127 19:45:05.913820 7492 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0127 19:45:05.913830 7492 kubeadm.go:322]
I0127 19:45:05.913964 7492 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0127 19:45:05.914073 7492 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0127 19:45:05.914089 7492 kubeadm.go:322]
I0127 19:45:05.914199 7492 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0127 19:45:05.914311 7492 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0127 19:45:05.914375 7492 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0127 19:45:05.914411 7492 kubeadm.go:322] - 'docker logs CONTAINERID'
I0127 19:45:05.914419 7492 kubeadm.go:322]
I0127 19:45:05.917325 7492 kubeadm.go:322] W0128 03:43:09.619357 3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0127 19:45:05.917474 7492 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0127 19:45:05.917530 7492 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0127 19:45:05.917639 7492 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
I0127 19:45:05.917733 7492 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 19:45:05.917825 7492 kubeadm.go:322] W0128 03:43:10.902448 3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0127 19:45:05.917922 7492 kubeadm.go:322] W0128 03:43:10.904092 3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0127 19:45:05.917989 7492 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0127 19:45:05.918053 7492 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
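(With --driver=docker the kubelet runs inside a node container on the host rather than on macOS itself, so the checks kubeadm lists above have to be executed in that container. A minimal sketch, assuming the kicbase node container is named after the profile, ingress-addon-legacy-054000:

  # open a shell in the node container that backs the failing profile (name assumed to match the profile)
  docker exec -it ingress-addon-legacy-054000 bash
  # inside the container, run the checks from the kubeadm output above
  systemctl status kubelet
  journalctl -xeu kubelet --no-pager | tail -n 100
  docker ps -a | grep kube | grep -v pause
)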
I0127 19:45:05.918089 7492 kubeadm.go:403] StartCluster complete in 3m54.907924679s
I0127 19:45:05.918183 7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0127 19:45:05.941054 7492 logs.go:279] 0 containers: []
W0127 19:45:05.941068 7492 logs.go:281] No container was found matching "kube-apiserver"
I0127 19:45:05.941137 7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0127 19:45:05.964528 7492 logs.go:279] 0 containers: []
W0127 19:45:05.964542 7492 logs.go:281] No container was found matching "etcd"
I0127 19:45:05.964612 7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0127 19:45:05.987525 7492 logs.go:279] 0 containers: []
W0127 19:45:05.987540 7492 logs.go:281] No container was found matching "coredns"
I0127 19:45:05.987612 7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0127 19:45:06.009743 7492 logs.go:279] 0 containers: []
W0127 19:45:06.009756 7492 logs.go:281] No container was found matching "kube-scheduler"
I0127 19:45:06.009833 7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0127 19:45:06.031949 7492 logs.go:279] 0 containers: []
W0127 19:45:06.031963 7492 logs.go:281] No container was found matching "kube-proxy"
I0127 19:45:06.032033 7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0127 19:45:06.054314 7492 logs.go:279] 0 containers: []
W0127 19:45:06.054327 7492 logs.go:281] No container was found matching "kubernetes-dashboard"
I0127 19:45:06.054402 7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0127 19:45:06.077463 7492 logs.go:279] 0 containers: []
W0127 19:45:06.077478 7492 logs.go:281] No container was found matching "storage-provisioner"
I0127 19:45:06.077545 7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0127 19:45:06.099288 7492 logs.go:279] 0 containers: []
W0127 19:45:06.099304 7492 logs.go:281] No container was found matching "kube-controller-manager"
I0127 19:45:06.099317 7492 logs.go:124] Gathering logs for kubelet ...
I0127 19:45:06.099328 7492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0127 19:45:06.137292 7492 logs.go:124] Gathering logs for dmesg ...
I0127 19:45:06.137306 7492 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0127 19:45:06.151028 7492 logs.go:124] Gathering logs for describe nodes ...
I0127 19:45:06.151045 7492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0127 19:45:06.208343 7492 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0127 19:45:06.208354 7492 logs.go:124] Gathering logs for Docker ...
I0127 19:45:06.208361 7492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0127 19:45:06.225388 7492 logs.go:124] Gathering logs for container status ...
I0127 19:45:06.225400 7492 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0127 19:45:08.274626 7492 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049230709s)
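(The diagnostics minikube gathers above can also be reproduced by hand against the same node. A rough equivalent using minikube ssh with the same binary and profile as this run; it assumes the node container is still up, which it is with the docker driver even when kubeadm fails:

  # pull the same kubelet and Docker logs minikube just collected
  out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-054000 "sudo journalctl -u kubelet -n 400"
  out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-054000 "sudo journalctl -u docker -n 400"
  # list whatever containers did start, preferring crictl and falling back to docker
  out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-054000 "sudo crictl ps -a || sudo docker ps -a"
)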
W0127 19:45:08.274749 7492 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0128 03:43:09.619357 3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0128 03:43:10.902448 3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0128 03:43:10.904092 3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
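(The [WARNING IsDockerSystemdCheck] line in the stderr above is the usual cgroupfs versus systemd mismatch. The generic fix from the guide the warning links to is to switch Docker itself to the systemd cgroup driver; a sketch of that, not specific to minikube's own node image:

  # /etc/docker/daemon.json, per the kubernetes.io guide referenced by the warning
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  # restart Docker so the kubelet and the runtime agree on the driver
  sudo systemctl restart docker

Inside a minikube profile the kubelet-side flag suggested later in this log, --extra-config=kubelet.cgroup-driver=systemd, is the more direct knob.)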
W0127 19:45:08.274768 7492 out.go:239] *
W0127 19:45:08.274899 7492 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0128 03:43:09.619357 3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0128 03:43:10.902448 3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0128 03:43:10.904092 3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0127 19:45:08.274913 7492 out.go:239] *
W0127 19:45:08.275622 7492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
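(For this particular run, the command requested in the box would be pointed at the same binary and profile, for example:

  # collect the full log bundle for the failing profile into logs.txt
  out/minikube-darwin-amd64 logs --file=logs.txt -p ingress-addon-legacy-054000
)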
I0127 19:45:08.338161 7492 out.go:177]
W0127 19:45:08.380493 7492 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0128 03:43:09.619357 3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0128 03:43:10.902448 3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0128 03:43:10.904092 3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0127 19:45:08.380663 7492 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0127 19:45:08.380754 7492 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
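(Applied to the invocation that failed here, shown again in the test failure line below, the suggested retry would look roughly like this; the only addition is the extra-config flag, and it is untested in this run:

  # same start arguments as the failing run, plus the cgroup-driver override minikube suggests
  out/minikube-darwin-amd64 start -p ingress-addon-legacy-054000 \
    --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
    --alsologtostderr -v=5 --driver=docker \
    --extra-config=kubelet.cgroup-driver=systemd
)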
I0127 19:45:08.402086 7492 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-054000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.87s)