=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-211000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0124 09:41:50.942444 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:42:18.630066 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/addons-709000/client.crt: no such file or directory
E0124 09:42:45.076410 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.081898 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.093148 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.115245 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.157261 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.237964 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.398310 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:45.720519 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:46.361400 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:47.642116 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:50.203604 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:42:55.323997 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:43:05.565284 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:43:26.045287 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
E0124 09:44:07.006856 4355 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/functional-997000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-211000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.026979869s)
-- stdout --
* [ingress-addon-legacy-211000] minikube v1.28.0 on Darwin 13.1
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-211000 in cluster ingress-addon-legacy-211000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 20.10.22 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0124 09:40:07.589112 7953 out.go:296] Setting OutFile to fd 1 ...
I0124 09:40:07.589259 7953 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 09:40:07.589265 7953 out.go:309] Setting ErrFile to fd 2...
I0124 09:40:07.589269 7953 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 09:40:07.589402 7953 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3057/.minikube/bin
I0124 09:40:07.589979 7953 out.go:303] Setting JSON to false
I0124 09:40:07.608173 7953 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2382,"bootTime":1674579625,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0124 09:40:07.608273 7953 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0124 09:40:07.630675 7953 out.go:177] * [ingress-addon-legacy-211000] minikube v1.28.0 on Darwin 13.1
I0124 09:40:07.652231 7953 notify.go:220] Checking for updates...
I0124 09:40:07.652265 7953 out.go:177] - MINIKUBE_LOCATION=15565
I0124 09:40:07.674300 7953 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3057/kubeconfig
I0124 09:40:07.696093 7953 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0124 09:40:07.717355 7953 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0124 09:40:07.739214 7953 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3057/.minikube
I0124 09:40:07.760409 7953 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0124 09:40:07.782477 7953 driver.go:365] Setting default libvirt URI to qemu:///system
I0124 09:40:07.842779 7953 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0124 09:40:07.842933 7953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0124 09:40:07.984678 7953 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-24 17:40:07.89206901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0124 09:40:08.028340 7953 out.go:177] * Using the docker driver based on user configuration
I0124 09:40:08.049494 7953 start.go:296] selected driver: docker
I0124 09:40:08.049516 7953 start.go:840] validating driver "docker" against <nil>
I0124 09:40:08.049532 7953 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0124 09:40:08.053351 7953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0124 09:40:08.194114 7953 info.go:266] docker info: {ID:CJUQ:42TS:YYE6:SCGN:MW5S:IHFM:RBEN:XGTW:DFJE:IUNN:DVU7:AORP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-24 17:40:08.103734687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0124 09:40:08.194247 7953 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0124 09:40:08.194389 7953 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0124 09:40:08.216222 7953 out.go:177] * Using Docker Desktop driver with root privileges
I0124 09:40:08.237975 7953 cni.go:84] Creating CNI manager for ""
I0124 09:40:08.238031 7953 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0124 09:40:08.238052 7953 start_flags.go:319] config:
{Name:ingress-addon-legacy-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0124 09:40:08.280885 7953 out.go:177] * Starting control plane node ingress-addon-legacy-211000 in cluster ingress-addon-legacy-211000
I0124 09:40:08.302009 7953 cache.go:120] Beginning downloading kic base image for docker with docker
I0124 09:40:08.323864 7953 out.go:177] * Pulling base image ...
I0124 09:40:08.365964 7953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0124 09:40:08.365969 7953 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
I0124 09:40:08.421034 7953 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
I0124 09:40:08.421057 7953 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
I0124 09:40:08.438202 7953 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0124 09:40:08.438241 7953 cache.go:57] Caching tarball of preloaded images
I0124 09:40:08.438631 7953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0124 09:40:08.459891 7953 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0124 09:40:08.501922 7953 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0124 09:40:08.580811 7953 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0124 09:40:11.000938 7953 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0124 09:40:11.001138 7953 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0124 09:40:11.620772 7953 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0124 09:40:11.621029 7953 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/config.json ...
I0124 09:40:11.621055 7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/config.json: {Name:mk608cc88daffca7698a234960f4d9ea5c3d5378 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 09:40:11.621385 7953 cache.go:193] Successfully downloaded all kic artifacts
I0124 09:40:11.621411 7953 start.go:364] acquiring machines lock for ingress-addon-legacy-211000: {Name:mkaa30950e8aec33011c28dbd6cc20c941a3c9b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 09:40:11.621531 7953 start.go:368] acquired machines lock for "ingress-addon-legacy-211000" in 113.671µs
I0124 09:40:11.621553 7953 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0124 09:40:11.621661 7953 start.go:125] createHost starting for "" (driver="docker")
I0124 09:40:11.665684 7953 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0124 09:40:11.666048 7953 start.go:159] libmachine.API.Create for "ingress-addon-legacy-211000" (driver="docker")
I0124 09:40:11.666090 7953 client.go:168] LocalClient.Create starting
I0124 09:40:11.666309 7953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem
I0124 09:40:11.666390 7953 main.go:141] libmachine: Decoding PEM data...
I0124 09:40:11.666421 7953 main.go:141] libmachine: Parsing certificate...
I0124 09:40:11.666511 7953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem
I0124 09:40:11.666576 7953 main.go:141] libmachine: Decoding PEM data...
I0124 09:40:11.666595 7953 main.go:141] libmachine: Parsing certificate...
I0124 09:40:11.667819 7953 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-211000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0124 09:40:11.724635 7953 cli_runner.go:211] docker network inspect ingress-addon-legacy-211000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0124 09:40:11.724758 7953 network_create.go:281] running [docker network inspect ingress-addon-legacy-211000] to gather additional debugging logs...
I0124 09:40:11.724781 7953 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-211000
W0124 09:40:11.778750 7953 cli_runner.go:211] docker network inspect ingress-addon-legacy-211000 returned with exit code 1
I0124 09:40:11.778779 7953 network_create.go:284] error running [docker network inspect ingress-addon-legacy-211000]: docker network inspect ingress-addon-legacy-211000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-211000
I0124 09:40:11.778794 7953 network_create.go:286] output of [docker network inspect ingress-addon-legacy-211000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-211000
** /stderr **
I0124 09:40:11.778888 7953 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0124 09:40:11.833038 7953 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00044bb00}
I0124 09:40:11.833074 7953 network_create.go:123] attempt to create docker network ingress-addon-legacy-211000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0124 09:40:11.833146 7953 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-211000 ingress-addon-legacy-211000
I0124 09:40:11.919843 7953 network_create.go:107] docker network ingress-addon-legacy-211000 192.168.49.0/24 created
I0124 09:40:11.919880 7953 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-211000" container
I0124 09:40:11.920001 7953 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0124 09:40:11.974496 7953 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-211000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211000 --label created_by.minikube.sigs.k8s.io=true
I0124 09:40:12.031500 7953 oci.go:103] Successfully created a docker volume ingress-addon-legacy-211000
I0124 09:40:12.031641 7953 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-211000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211000 --entrypoint /usr/bin/test -v ingress-addon-legacy-211000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
I0124 09:40:12.489067 7953 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-211000
I0124 09:40:12.489105 7953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0124 09:40:12.489129 7953 kic.go:190] Starting extracting preloaded images to volume ...
I0124 09:40:12.489232 7953 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-211000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
I0124 09:40:18.531166 7953 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3057/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-211000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (6.041943179s)
I0124 09:40:18.531196 7953 kic.go:199] duration metric: took 6.042147 seconds to extract preloaded images to volume
I0124 09:40:18.531335 7953 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0124 09:40:18.677155 7953 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-211000 --name ingress-addon-legacy-211000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-211000 --network ingress-addon-legacy-211000 --ip 192.168.49.2 --volume ingress-addon-legacy-211000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
I0124 09:40:19.037764 7953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Running}}
I0124 09:40:19.098389 7953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Status}}
I0124 09:40:19.164372 7953 cli_runner.go:164] Run: docker exec ingress-addon-legacy-211000 stat /var/lib/dpkg/alternatives/iptables
I0124 09:40:19.285591 7953 oci.go:144] the created container "ingress-addon-legacy-211000" has a running status.
I0124 09:40:19.285627 7953 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa...
I0124 09:40:19.423271 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0124 09:40:19.423353 7953 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0124 09:40:19.529506 7953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Status}}
I0124 09:40:19.589624 7953 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0124 09:40:19.589645 7953 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-211000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0124 09:40:19.694047 7953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211000 --format={{.State.Status}}
I0124 09:40:19.752126 7953 machine.go:88] provisioning docker machine ...
I0124 09:40:19.752165 7953 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-211000"
I0124 09:40:19.752266 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:19.810362 7953 main.go:141] libmachine: Using SSH client type: native
I0124 09:40:19.810571 7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50707 <nil> <nil>}
I0124 09:40:19.810587 7953 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-211000 && echo "ingress-addon-legacy-211000" | sudo tee /etc/hostname
I0124 09:40:19.954363 7953 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-211000
I0124 09:40:19.954465 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:20.012604 7953 main.go:141] libmachine: Using SSH client type: native
I0124 09:40:20.012761 7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50707 <nil> <nil>}
I0124 09:40:20.012782 7953 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-211000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-211000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-211000' | sudo tee -a /etc/hosts;
fi
fi
I0124 09:40:20.147263 7953 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0124 09:40:20.147287 7953 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3057/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3057/.minikube}
I0124 09:40:20.147311 7953 ubuntu.go:177] setting up certificates
I0124 09:40:20.147317 7953 provision.go:83] configureAuth start
I0124 09:40:20.147393 7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211000
I0124 09:40:20.206612 7953 provision.go:138] copyHostCerts
I0124 09:40:20.206664 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
I0124 09:40:20.206721 7953 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem, removing ...
I0124 09:40:20.206727 7953 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem
I0124 09:40:20.206863 7953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.pem (1078 bytes)
I0124 09:40:20.207026 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
I0124 09:40:20.207061 7953 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem, removing ...
I0124 09:40:20.207066 7953 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem
I0124 09:40:20.207143 7953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/cert.pem (1123 bytes)
I0124 09:40:20.207268 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
I0124 09:40:20.207303 7953 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem, removing ...
I0124 09:40:20.207307 7953 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem
I0124 09:40:20.207375 7953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3057/.minikube/key.pem (1675 bytes)
I0124 09:40:20.207496 7953 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-211000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-211000]
I0124 09:40:20.461765 7953 provision.go:172] copyRemoteCerts
I0124 09:40:20.461823 7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0124 09:40:20.461878 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:20.521404 7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
I0124 09:40:20.613532 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0124 09:40:20.613620 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0124 09:40:20.631074 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0124 09:40:20.631146 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0124 09:40:20.648126 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem -> /etc/docker/server.pem
I0124 09:40:20.648248 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0124 09:40:20.665846 7953 provision.go:86] duration metric: configureAuth took 518.523623ms
I0124 09:40:20.665863 7953 ubuntu.go:193] setting minikube options for container-runtime
I0124 09:40:20.666012 7953 config.go:180] Loaded profile config "ingress-addon-legacy-211000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0124 09:40:20.666072 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:20.724005 7953 main.go:141] libmachine: Using SSH client type: native
I0124 09:40:20.724171 7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50707 <nil> <nil>}
I0124 09:40:20.724188 7953 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0124 09:40:20.861622 7953 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0124 09:40:20.861640 7953 ubuntu.go:71] root file system type: overlay
I0124 09:40:20.861804 7953 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0124 09:40:20.861885 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:20.919603 7953 main.go:141] libmachine: Using SSH client type: native
I0124 09:40:20.919774 7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50707 <nil> <nil>}
I0124 09:40:20.919823 7953 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0124 09:40:21.063158 7953 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0124 09:40:21.063289 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:21.122163 7953 main.go:141] libmachine: Using SSH client type: native
I0124 09:40:21.122308 7953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50707 <nil> <nil>}
I0124 09:40:21.122323 7953 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0124 09:40:21.726119 7953 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-12-15 22:25:58.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-24 17:40:21.061403580 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0124 09:40:21.726146 7953 machine.go:91] provisioned docker machine in 1.97402489s
I0124 09:40:21.726152 7953 client.go:171] LocalClient.Create took 10.060173537s
I0124 09:40:21.726169 7953 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-211000" took 10.060240414s
I0124 09:40:21.726180 7953 start.go:300] post-start starting for "ingress-addon-legacy-211000" (driver="docker")
I0124 09:40:21.726186 7953 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0124 09:40:21.726272 7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0124 09:40:21.726343 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:21.788539 7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
I0124 09:40:21.885796 7953 ssh_runner.go:195] Run: cat /etc/os-release
I0124 09:40:21.889290 7953 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0124 09:40:21.889306 7953 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0124 09:40:21.889318 7953 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0124 09:40:21.889326 7953 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0124 09:40:21.889335 7953 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/addons for local assets ...
I0124 09:40:21.889430 7953 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3057/.minikube/files for local assets ...
I0124 09:40:21.889607 7953 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> 43552.pem in /etc/ssl/certs
I0124 09:40:21.889614 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> /etc/ssl/certs/43552.pem
I0124 09:40:21.889804 7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0124 09:40:21.896936 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /etc/ssl/certs/43552.pem (1708 bytes)
I0124 09:40:21.914317 7953 start.go:303] post-start completed in 188.129159ms
I0124 09:40:21.914914 7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211000
I0124 09:40:21.972516 7953 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/config.json ...
I0124 09:40:21.972956 7953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0124 09:40:21.973038 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:22.032724 7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
I0124 09:40:22.124356 7953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0124 09:40:22.129022 7953 start.go:128] duration metric: createHost completed in 10.507476295s
I0124 09:40:22.129047 7953 start.go:83] releasing machines lock for "ingress-addon-legacy-211000", held for 10.507627897s
I0124 09:40:22.129136 7953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211000
I0124 09:40:22.185457 7953 ssh_runner.go:195] Run: cat /version.json
I0124 09:40:22.185486 7953 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0124 09:40:22.185529 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:22.185552 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:22.245975 7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
I0124 09:40:22.246182 7953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50707 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3057/.minikube/machines/ingress-addon-legacy-211000/id_rsa Username:docker}
I0124 09:40:22.334104 7953 ssh_runner.go:195] Run: systemctl --version
I0124 09:40:22.539612 7953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0124 09:40:22.544901 7953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0124 09:40:22.564820 7953 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0124 09:40:22.564909 7953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0124 09:40:22.578784 7953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0124 09:40:22.586425 7953 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0124 09:40:22.586438 7953 start.go:472] detecting cgroup driver to use...
I0124 09:40:22.586455 7953 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 09:40:22.586535 7953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 09:40:22.610283 7953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0124 09:40:22.621185 7953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0124 09:40:22.629441 7953 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0124 09:40:22.629499 7953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0124 09:40:22.637825 7953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 09:40:22.646425 7953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0124 09:40:22.654894 7953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 09:40:22.663705 7953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0124 09:40:22.671440 7953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0124 09:40:22.679811 7953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0124 09:40:22.687056 7953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0124 09:40:22.694360 7953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 09:40:22.762707 7953 ssh_runner.go:195] Run: sudo systemctl restart containerd
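Note: the sed edits above only take effect once containerd has been restarted; the intended end state can be confirmed by grepping the rewritten config from the host (a sketch using the node container name from this run; the grep pattern is illustrative, not taken from this log):
  # check the values the sed commands were meant to set in /etc/containerd/config.toml
  docker exec -t ingress-addon-legacy-211000 grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml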
I0124 09:40:22.834083 7953 start.go:472] detecting cgroup driver to use...
I0124 09:40:22.834106 7953 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 09:40:22.834182 7953 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0124 09:40:22.845592 7953 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0124 09:40:22.845666 7953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0124 09:40:22.856391 7953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0124 09:40:22.870845 7953 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0124 09:40:22.966831 7953 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0124 09:40:23.066074 7953 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0124 09:40:23.066107 7953 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0124 09:40:23.081593 7953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 09:40:23.172477 7953 ssh_runner.go:195] Run: sudo systemctl restart docker
I0124 09:40:23.382709 7953 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 09:40:23.412181 7953 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
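Note: the 144-byte /etc/docker/daemon.json written above is what carries the "cgroupfs" setting into the restarted daemon; the driver docker actually picked up can be double-checked the same way minikube does a few lines later (a sketch, reusing the container name from this run):
  # show the generated daemon.json and the effective cgroup driver
  docker exec -t ingress-addon-legacy-211000 cat /etc/docker/daemon.json
  docker exec -t ingress-addon-legacy-211000 docker info --format '{{.CgroupDriver}}'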
I0124 09:40:23.463408 7953 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.22 ...
I0124 09:40:23.463583 7953 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-211000 dig +short host.docker.internal
I0124 09:40:23.579388 7953 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0124 09:40:23.579535 7953 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0124 09:40:23.584154 7953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 09:40:23.593905 7953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-211000
I0124 09:40:23.651905 7953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0124 09:40:23.651994 7953 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0124 09:40:23.677865 7953 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0124 09:40:23.677884 7953 docker.go:560] Images already preloaded, skipping extraction
I0124 09:40:23.677975 7953 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0124 09:40:23.701976 7953 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0124 09:40:23.701995 7953 cache_images.go:84] Images are preloaded, skipping loading
I0124 09:40:23.702081 7953 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0124 09:40:23.773109 7953 cni.go:84] Creating CNI manager for ""
I0124 09:40:23.773127 7953 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0124 09:40:23.773147 7953 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0124 09:40:23.773171 7953 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-211000 NodeName:ingress-addon-legacy-211000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0124 09:40:23.773400 7953 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-211000"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
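Note: this rendered config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and then promoted to kubeadm.yaml before "kubeadm init"; when debugging a run like this after the fact, the file can be read back from the node (a sketch, reusing the container name and paths that appear in this log):
  # show the kubeadm config actually present on the node
  docker exec -t ingress-addon-legacy-211000 cat /var/tmp/minikube/kubeadm.yaml.new
  docker exec -t ingress-addon-legacy-211000 cat /var/tmp/minikube/kubeadm.yaml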
I0124 09:40:23.773522 7953 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-211000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
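Note: the ExecStart override above is written into the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines that follow); the unit kubeadm will actually start can be inspected with standard systemd tooling (a sketch, not commands from this log):
  # show the merged kubelet unit including the minikube drop-in, and its enablement state
  docker exec -t ingress-addon-legacy-211000 systemctl cat kubelet
  docker exec -t ingress-addon-legacy-211000 systemctl is-enabled kubelet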
I0124 09:40:23.773586 7953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0124 09:40:23.781801 7953 binaries.go:44] Found k8s binaries, skipping transfer
I0124 09:40:23.781878 7953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0124 09:40:23.789165 7953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0124 09:40:23.802853 7953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0124 09:40:23.816147 7953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0124 09:40:23.829667 7953 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0124 09:40:23.833557 7953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 09:40:23.843928 7953 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000 for IP: 192.168.49.2
I0124 09:40:23.843947 7953 certs.go:186] acquiring lock for shared ca certs: {Name:mk86da88dcf76a0f52f98f7208bc1d04a2e55c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 09:40:23.844115 7953 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key
I0124 09:40:23.844181 7953 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key
I0124 09:40:23.844226 7953 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.key
I0124 09:40:23.844241 7953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.crt with IP's: []
I0124 09:40:23.956000 7953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.crt ...
I0124 09:40:23.956010 7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.crt: {Name:mkcc66a6a579ed07c5d0fe8005d5efbf327e4407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 09:40:23.956284 7953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.key ...
I0124 09:40:23.956292 7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/client.key: {Name:mk9ef8aa5f6d0f635158bc9ada91e0b32146eefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 09:40:23.956472 7953 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key.dd3b5fb2
I0124 09:40:23.956486 7953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0124 09:40:24.272400 7953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt.dd3b5fb2 ...
I0124 09:40:24.272414 7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt.dd3b5fb2: {Name:mkd5060f920228c8deffcfba869657319c9157ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 09:40:24.272708 7953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key.dd3b5fb2 ...
I0124 09:40:24.272716 7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key.dd3b5fb2: {Name:mk99b57c0bbc07148afed701e08444e3a30d05da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 09:40:24.272904 7953 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt
I0124 09:40:24.273074 7953 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key
I0124 09:40:24.273231 7953 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key
I0124 09:40:24.273246 7953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt with IP's: []
I0124 09:40:24.565401 7953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt ...
I0124 09:40:24.565416 7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt: {Name:mkfcb5d6eb5e9b4779f2ecab1dce3bf7bbea2e82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 09:40:24.565720 7953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key ...
I0124 09:40:24.565730 7953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key: {Name:mk8b3ddc900a601a5b725be79f499bdb29e9666f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 09:40:24.565956 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0124 09:40:24.565984 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0124 09:40:24.566003 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0124 09:40:24.566040 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0124 09:40:24.566092 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0124 09:40:24.566127 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0124 09:40:24.566143 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0124 09:40:24.566159 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0124 09:40:24.566265 7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem (1338 bytes)
W0124 09:40:24.566309 7953 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355_empty.pem, impossibly tiny 0 bytes
I0124 09:40:24.566319 7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca-key.pem (1679 bytes)
I0124 09:40:24.566418 7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/ca.pem (1078 bytes)
I0124 09:40:24.566448 7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/cert.pem (1123 bytes)
I0124 09:40:24.566517 7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/certs/key.pem (1675 bytes)
I0124 09:40:24.566596 7953 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem (1708 bytes)
I0124 09:40:24.566628 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem -> /usr/share/ca-certificates/4355.pem
I0124 09:40:24.566676 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem -> /usr/share/ca-certificates/43552.pem
I0124 09:40:24.566694 7953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0124 09:40:24.567244 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0124 09:40:24.586499 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0124 09:40:24.603788 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0124 09:40:24.621086 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/profiles/ingress-addon-legacy-211000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0124 09:40:24.638240 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0124 09:40:24.655283 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0124 09:40:24.672762 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0124 09:40:24.689940 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0124 09:40:24.707549 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/certs/4355.pem --> /usr/share/ca-certificates/4355.pem (1338 bytes)
I0124 09:40:24.725148 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/files/etc/ssl/certs/43552.pem --> /usr/share/ca-certificates/43552.pem (1708 bytes)
I0124 09:40:24.742483 7953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3057/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0124 09:40:24.760070 7953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0124 09:40:24.773060 7953 ssh_runner.go:195] Run: openssl version
I0124 09:40:24.778703 7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0124 09:40:24.786939 7953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0124 09:40:24.791068 7953 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:28 /usr/share/ca-certificates/minikubeCA.pem
I0124 09:40:24.791116 7953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0124 09:40:24.796423 7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0124 09:40:24.804506 7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4355.pem && ln -fs /usr/share/ca-certificates/4355.pem /etc/ssl/certs/4355.pem"
I0124 09:40:24.813127 7953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4355.pem
I0124 09:40:24.817338 7953 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:34 /usr/share/ca-certificates/4355.pem
I0124 09:40:24.817397 7953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4355.pem
I0124 09:40:24.822826 7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4355.pem /etc/ssl/certs/51391683.0"
I0124 09:40:24.831220 7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43552.pem && ln -fs /usr/share/ca-certificates/43552.pem /etc/ssl/certs/43552.pem"
I0124 09:40:24.839644 7953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43552.pem
I0124 09:40:24.843676 7953 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:34 /usr/share/ca-certificates/43552.pem
I0124 09:40:24.843723 7953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43552.pem
I0124 09:40:24.849443 7953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43552.pem /etc/ssl/certs/3ec20f2e.0"
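Note: at this point the extra CA certs are hashed and symlinked under /etc/ssl/certs, and the apiserver serving certificate has already been copied to /var/lib/minikube/certs; it can be sanity-checked with standard openssl flags (a sketch, not a command from this log):
  # confirm the subject and expiry of the copied apiserver certificate
  docker exec -t ingress-addon-legacy-211000 openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt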
I0124 09:40:24.857924 7953 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0124 09:40:24.858031 7953 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0124 09:40:24.880036 7953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0124 09:40:24.888820 7953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0124 09:40:24.896337 7953 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0124 09:40:24.896404 7953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0124 09:40:24.904093 7953 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0124 09:40:24.904124 7953 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0124 09:40:24.952248 7953 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0124 09:40:24.952307 7953 kubeadm.go:322] [preflight] Running pre-flight checks
I0124 09:40:25.254148 7953 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0124 09:40:25.254342 7953 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0124 09:40:25.254442 7953 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0124 09:40:25.478129 7953 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0124 09:40:25.478589 7953 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0124 09:40:25.478639 7953 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0124 09:40:25.552045 7953 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0124 09:40:25.572706 7953 out.go:204] - Generating certificates and keys ...
I0124 09:40:25.572800 7953 kubeadm.go:322] [certs] Using existing ca certificate authority
I0124 09:40:25.572895 7953 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0124 09:40:25.685263 7953 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0124 09:40:25.842892 7953 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0124 09:40:26.100753 7953 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0124 09:40:26.322732 7953 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0124 09:40:26.386531 7953 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0124 09:40:26.386668 7953 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0124 09:40:26.475446 7953 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0124 09:40:26.475541 7953 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0124 09:40:26.544290 7953 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0124 09:40:26.753569 7953 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0124 09:40:26.902294 7953 kubeadm.go:322] [certs] Generating "sa" key and public key
I0124 09:40:26.902355 7953 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0124 09:40:27.026203 7953 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0124 09:40:27.124270 7953 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0124 09:40:27.185409 7953 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0124 09:40:27.331587 7953 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0124 09:40:27.332241 7953 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0124 09:40:27.373548 7953 out.go:204] - Booting up control plane ...
I0124 09:40:27.373764 7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0124 09:40:27.373908 7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0124 09:40:27.374066 7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0124 09:40:27.374209 7953 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0124 09:40:27.374480 7953 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0124 09:41:07.342039 7953 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0124 09:41:07.342973 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:41:07.343203 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:41:12.344962 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:41:12.345211 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:41:22.347108 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:41:22.347361 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:41:42.347313 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:41:42.347462 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:42:22.349128 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:42:22.349446 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:42:22.349462 7953 kubeadm.go:322]
I0124 09:42:22.349508 7953 kubeadm.go:322] Unfortunately, an error has occurred:
I0124 09:42:22.349548 7953 kubeadm.go:322] timed out waiting for the condition
I0124 09:42:22.349554 7953 kubeadm.go:322]
I0124 09:42:22.349622 7953 kubeadm.go:322] This error is likely caused by:
I0124 09:42:22.349679 7953 kubeadm.go:322] - The kubelet is not running
I0124 09:42:22.349875 7953 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0124 09:42:22.349896 7953 kubeadm.go:322]
I0124 09:42:22.350053 7953 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0124 09:42:22.350103 7953 kubeadm.go:322] - 'systemctl status kubelet'
I0124 09:42:22.350142 7953 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0124 09:42:22.350150 7953 kubeadm.go:322]
I0124 09:42:22.350315 7953 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0124 09:42:22.350420 7953 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0124 09:42:22.350436 7953 kubeadm.go:322]
I0124 09:42:22.350532 7953 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0124 09:42:22.350594 7953 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0124 09:42:22.350658 7953 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0124 09:42:22.350689 7953 kubeadm.go:322] - 'docker logs CONTAINERID'
I0124 09:42:22.350697 7953 kubeadm.go:322]
I0124 09:42:22.354217 7953 kubeadm.go:322] W0124 17:40:24.951424 1167 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0124 09:42:22.354373 7953 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0124 09:42:22.354450 7953 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0124 09:42:22.354564 7953 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
I0124 09:42:22.354652 7953 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0124 09:42:22.354758 7953 kubeadm.go:322] W0124 17:40:27.337491 1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0124 09:42:22.354856 7953 kubeadm.go:322] W0124 17:40:27.338480 1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0124 09:42:22.354927 7953 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0124 09:42:22.354993 7953 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0124 09:42:22.355189 7953 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0124 17:40:24.951424 1167 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0124 17:40:27.337491 1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0124 17:40:27.338480 1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-211000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0124 17:40:24.951424 1167 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0124 17:40:27.337491 1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0124 17:40:27.338480 1167 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0124 09:42:22.355235 7953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0124 09:42:22.778485 7953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0124 09:42:22.788264 7953 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0124 09:42:22.788329 7953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0124 09:42:22.795632 7953 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0124 09:42:22.795655 7953 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0124 09:42:22.842330 7953 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0124 09:42:22.842386 7953 kubeadm.go:322] [preflight] Running pre-flight checks
I0124 09:42:23.136443 7953 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0124 09:42:23.136543 7953 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0124 09:42:23.136629 7953 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0124 09:42:23.356886 7953 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0124 09:42:23.357325 7953 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0124 09:42:23.357360 7953 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0124 09:42:23.428001 7953 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0124 09:42:23.449498 7953 out.go:204] - Generating certificates and keys ...
I0124 09:42:23.449583 7953 kubeadm.go:322] [certs] Using existing ca certificate authority
I0124 09:42:23.449647 7953 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0124 09:42:23.449750 7953 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0124 09:42:23.449817 7953 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0124 09:42:23.449869 7953 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0124 09:42:23.449912 7953 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0124 09:42:23.450003 7953 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0124 09:42:23.450062 7953 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0124 09:42:23.450126 7953 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0124 09:42:23.450227 7953 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0124 09:42:23.450267 7953 kubeadm.go:322] [certs] Using the existing "sa" key
I0124 09:42:23.450390 7953 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0124 09:42:23.560138 7953 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0124 09:42:23.634333 7953 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0124 09:42:23.775054 7953 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0124 09:42:23.968659 7953 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0124 09:42:23.969308 7953 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0124 09:42:23.991315 7953 out.go:204] - Booting up control plane ...
I0124 09:42:23.991471 7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0124 09:42:23.991636 7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0124 09:42:23.991790 7953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0124 09:42:23.991977 7953 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0124 09:42:23.992266 7953 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0124 09:43:03.977571 7953 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0124 09:43:03.978588 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:43:03.978971 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:43:08.980161 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:43:08.980383 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:43:18.981080 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:43:18.981234 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:43:38.983193 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:43:38.983428 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:44:18.984653 7953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0124 09:44:18.984884 7953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0124 09:44:18.984895 7953 kubeadm.go:322]
I0124 09:44:18.984981 7953 kubeadm.go:322] Unfortunately, an error has occurred:
I0124 09:44:18.985038 7953 kubeadm.go:322] timed out waiting for the condition
I0124 09:44:18.985049 7953 kubeadm.go:322]
I0124 09:44:18.985096 7953 kubeadm.go:322] This error is likely caused by:
I0124 09:44:18.985144 7953 kubeadm.go:322] - The kubelet is not running
I0124 09:44:18.985274 7953 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0124 09:44:18.985289 7953 kubeadm.go:322]
I0124 09:44:18.985408 7953 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0124 09:44:18.985446 7953 kubeadm.go:322] - 'systemctl status kubelet'
I0124 09:44:18.985491 7953 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0124 09:44:18.985506 7953 kubeadm.go:322]
I0124 09:44:18.985619 7953 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0124 09:44:18.985717 7953 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0124 09:44:18.985728 7953 kubeadm.go:322]
I0124 09:44:18.985841 7953 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0124 09:44:18.985909 7953 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0124 09:44:18.985983 7953 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0124 09:44:18.986027 7953 kubeadm.go:322] - 'docker logs CONTAINERID'
I0124 09:44:18.986043 7953 kubeadm.go:322]
I0124 09:44:18.988135 7953 kubeadm.go:322] W0124 17:42:22.841893 3660 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0124 09:44:18.988292 7953 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0124 09:44:18.988356 7953 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0124 09:44:18.988474 7953 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
I0124 09:44:18.988560 7953 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0124 09:44:18.988656 7953 kubeadm.go:322] W0124 17:42:23.972891 3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0124 09:44:18.988760 7953 kubeadm.go:322] W0124 17:42:23.973793 3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0124 09:44:18.988847 7953 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0124 09:44:18.988915 7953 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0124 09:44:18.988937 7953 kubeadm.go:403] StartCluster complete in 3m54.133734717s
I0124 09:44:18.989024 7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0124 09:44:19.011615 7953 logs.go:279] 0 containers: []
W0124 09:44:19.011630 7953 logs.go:281] No container was found matching "kube-apiserver"
I0124 09:44:19.011698 7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0124 09:44:19.034868 7953 logs.go:279] 0 containers: []
W0124 09:44:19.034881 7953 logs.go:281] No container was found matching "etcd"
I0124 09:44:19.034957 7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0124 09:44:19.058323 7953 logs.go:279] 0 containers: []
W0124 09:44:19.058338 7953 logs.go:281] No container was found matching "coredns"
I0124 09:44:19.058414 7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0124 09:44:19.080195 7953 logs.go:279] 0 containers: []
W0124 09:44:19.080213 7953 logs.go:281] No container was found matching "kube-scheduler"
I0124 09:44:19.080288 7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0124 09:44:19.104179 7953 logs.go:279] 0 containers: []
W0124 09:44:19.104192 7953 logs.go:281] No container was found matching "kube-proxy"
I0124 09:44:19.104261 7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0124 09:44:19.126233 7953 logs.go:279] 0 containers: []
W0124 09:44:19.126246 7953 logs.go:281] No container was found matching "kubernetes-dashboard"
I0124 09:44:19.126317 7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0124 09:44:19.148423 7953 logs.go:279] 0 containers: []
W0124 09:44:19.148438 7953 logs.go:281] No container was found matching "storage-provisioner"
I0124 09:44:19.148508 7953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0124 09:44:19.171809 7953 logs.go:279] 0 containers: []
W0124 09:44:19.171823 7953 logs.go:281] No container was found matching "kube-controller-manager"
I0124 09:44:19.171836 7953 logs.go:124] Gathering logs for kubelet ...
I0124 09:44:19.171843 7953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0124 09:44:19.209674 7953 logs.go:124] Gathering logs for dmesg ...
I0124 09:44:19.209692 7953 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0124 09:44:19.221759 7953 logs.go:124] Gathering logs for describe nodes ...
I0124 09:44:19.221773 7953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0124 09:44:19.276804 7953 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0124 09:44:19.276816 7953 logs.go:124] Gathering logs for Docker ...
I0124 09:44:19.276824 7953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0124 09:44:19.293935 7953 logs.go:124] Gathering logs for container status ...
I0124 09:44:19.293950 7953 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0124 09:44:21.366814 7953 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.072873986s)
W0124 09:44:21.366942 7953 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0124 17:42:22.841893 3660 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0124 17:42:23.972891 3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0124 17:42:23.973793 3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0124 09:44:21.366959 7953 out.go:239] *
W0124 09:44:21.367080 7953 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0124 17:42:22.841893 3660 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0124 17:42:23.972891 3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0124 17:42:23.973793 3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0124 09:44:21.367094 7953 out.go:239] *
W0124 09:44:21.367710 7953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0124 09:44:21.430327 7953 out.go:177]
W0124 09:44:21.472586 7953 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0124 17:42:22.841893 3660 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0124 17:42:23.972891 3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0124 17:42:23.973793 3660 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0124 09:44:21.472723 7953 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0124 09:44:21.472809 7953 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0124 09:44:21.494226 7953 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-211000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.06s)
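A possible follow-up, sketched from minikube's own suggestion in the output above (hypothetical, not part of the captured run): retry the same start with the kubelet cgroup driver forced to systemd, and if it still fails, read the kubelet journal inside the node container. The binary path, profile name, and flags are taken from the failing command; the docker exec step assumes the docker driver names its node container after the profile.

    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-211000
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-211000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd
    # still failing? inspect the kubelet logs inside the node container
    docker exec ingress-addon-legacy-211000 journalctl -xeu kubelet | tail -n 100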