=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-691000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0223 16:53:55.803766 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:54:23.496359 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/addons-106000/client.crt: no such file or directory
E0223 16:54:44.875000 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:44.881462 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:44.893596 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:44.914410 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:44.954724 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:45.035008 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:45.195664 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:45.515831 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:46.156235 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:47.438573 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:50.000168 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:54:55.120559 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:55:05.361515 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
E0223 16:55:25.842623 24885 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/functional-523000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-691000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m23.893518114s)
-- stdout --
* [ingress-addon-legacy-691000] minikube v1.29.0 on Darwin 13.2
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-691000 in cluster ingress-addon-legacy-691000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0223 16:51:42.330861 27829 out.go:296] Setting OutFile to fd 1 ...
I0223 16:51:42.331022 27829 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 16:51:42.331032 27829 out.go:309] Setting ErrFile to fd 2...
I0223 16:51:42.331036 27829 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 16:51:42.331149 27829 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-24428/.minikube/bin
I0223 16:51:42.332560 27829 out.go:303] Setting JSON to false
I0223 16:51:42.351039 27829 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6677,"bootTime":1677193225,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W0223 16:51:42.351121 27829 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0223 16:51:42.373164 27829 out.go:177] * [ingress-addon-legacy-691000] minikube v1.29.0 on Darwin 13.2
I0223 16:51:42.394401 27829 out.go:177] - MINIKUBE_LOCATION=15909
I0223 16:51:42.394367 27829 notify.go:220] Checking for updates...
I0223 16:51:42.416555 27829 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15909-24428/kubeconfig
I0223 16:51:42.438217 27829 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0223 16:51:42.460208 27829 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 16:51:42.481218 27829 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-24428/.minikube
I0223 16:51:42.502257 27829 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 16:51:42.523570 27829 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 16:51:42.583780 27829 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0223 16:51:42.583894 27829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0223 16:51:42.724375 27829 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 00:51:42.632900741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0223 16:51:42.746479 27829 out.go:177] * Using the docker driver based on user configuration
I0223 16:51:42.768090 27829 start.go:296] selected driver: docker
I0223 16:51:42.768117 27829 start.go:857] validating driver "docker" against <nil>
I0223 16:51:42.768135 27829 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 16:51:42.772037 27829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0223 16:51:42.914635 27829 info.go:266] docker info: {ID:SWKV:KMRO:OUIS:YAN3:ZG2Q:7LAB:6DZ6:VZUC:VVW3:VUUP:WZOT:LF2A Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-24 00:51:42.821663406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0223 16:51:42.914795 27829 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0223 16:51:42.914977 27829 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0223 16:51:42.936574 27829 out.go:177] * Using Docker Desktop driver with root privileges
I0223 16:51:42.958718 27829 cni.go:84] Creating CNI manager for ""
I0223 16:51:42.958758 27829 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0223 16:51:42.958772 27829 start_flags.go:319] config:
{Name:ingress-addon-legacy-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 16:51:43.001559 27829 out.go:177] * Starting control plane node ingress-addon-legacy-691000 in cluster ingress-addon-legacy-691000
I0223 16:51:43.022515 27829 cache.go:120] Beginning downloading kic base image for docker with docker
I0223 16:51:43.044385 27829 out.go:177] * Pulling base image ...
I0223 16:51:43.086571 27829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 16:51:43.086606 27829 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
I0223 16:51:43.146922 27829 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
I0223 16:51:43.146945 27829 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
I0223 16:51:43.193668 27829 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0223 16:51:43.193699 27829 cache.go:57] Caching tarball of preloaded images
I0223 16:51:43.194039 27829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 16:51:43.215875 27829 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0223 16:51:43.257763 27829 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 16:51:43.458479 27829 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0223 16:51:56.118847 27829 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 16:51:56.119090 27829 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 16:51:56.790026 27829 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0223 16:51:56.790255 27829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/config.json ...
I0223 16:51:56.790284 27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/config.json: {Name:mka594ad54848610af6d11e54032c8be3efc53f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 16:51:56.790580 27829 cache.go:193] Successfully downloaded all kic artifacts
I0223 16:51:56.790608 27829 start.go:364] acquiring machines lock for ingress-addon-legacy-691000: {Name:mk8657a1d89f12d943cb8e554a12c5028bc1eb5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 16:51:56.790701 27829 start.go:368] acquired machines lock for "ingress-addon-legacy-691000" in 85.148µs
I0223 16:51:56.790728 27829 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 16:51:56.790771 27829 start.go:125] createHost starting for "" (driver="docker")
I0223 16:51:56.842102 27829 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0223 16:51:56.842417 27829 start.go:159] libmachine.API.Create for "ingress-addon-legacy-691000" (driver="docker")
I0223 16:51:56.842501 27829 client.go:168] LocalClient.Create starting
I0223 16:51:56.842718 27829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem
I0223 16:51:56.842817 27829 main.go:141] libmachine: Decoding PEM data...
I0223 16:51:56.842848 27829 main.go:141] libmachine: Parsing certificate...
I0223 16:51:56.842966 27829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem
I0223 16:51:56.843038 27829 main.go:141] libmachine: Decoding PEM data...
I0223 16:51:56.843055 27829 main.go:141] libmachine: Parsing certificate...
I0223 16:51:56.843828 27829 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-691000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0223 16:51:56.900201 27829 cli_runner.go:211] docker network inspect ingress-addon-legacy-691000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0223 16:51:56.900314 27829 network_create.go:281] running [docker network inspect ingress-addon-legacy-691000] to gather additional debugging logs...
I0223 16:51:56.900330 27829 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-691000
W0223 16:51:56.953733 27829 cli_runner.go:211] docker network inspect ingress-addon-legacy-691000 returned with exit code 1
I0223 16:51:56.953759 27829 network_create.go:284] error running [docker network inspect ingress-addon-legacy-691000]: docker network inspect ingress-addon-legacy-691000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-691000
I0223 16:51:56.953769 27829 network_create.go:286] output of [docker network inspect ingress-addon-legacy-691000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-691000
** /stderr **
I0223 16:51:56.953869 27829 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0223 16:51:57.008174 27829 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001266d50}
I0223 16:51:57.008205 27829 network_create.go:123] attempt to create docker network ingress-addon-legacy-691000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0223 16:51:57.008274 27829 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-691000 ingress-addon-legacy-691000
I0223 16:51:57.094976 27829 network_create.go:107] docker network ingress-addon-legacy-691000 192.168.49.0/24 created
I0223 16:51:57.095022 27829 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-691000" container
I0223 16:51:57.095156 27829 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0223 16:51:57.149625 27829 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-691000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-691000 --label created_by.minikube.sigs.k8s.io=true
I0223 16:51:57.204400 27829 oci.go:103] Successfully created a docker volume ingress-addon-legacy-691000
I0223 16:51:57.204533 27829 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-691000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-691000 --entrypoint /usr/bin/test -v ingress-addon-legacy-691000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
I0223 16:51:57.659487 27829 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-691000
I0223 16:51:57.659554 27829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 16:51:57.659568 27829 kic.go:190] Starting extracting preloaded images to volume ...
I0223 16:51:57.659694 27829 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-691000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
I0223 16:52:04.035759 27829 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-24428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-691000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.375793536s)
I0223 16:52:04.035779 27829 kic.go:199] duration metric: took 6.376058 seconds to extract preloaded images to volume
I0223 16:52:04.035901 27829 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0223 16:52:04.176831 27829 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-691000 --name ingress-addon-legacy-691000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-691000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-691000 --network ingress-addon-legacy-691000 --ip 192.168.49.2 --volume ingress-addon-legacy-691000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
I0223 16:52:04.526734 27829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Running}}
I0223 16:52:04.639526 27829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Status}}
I0223 16:52:04.701405 27829 cli_runner.go:164] Run: docker exec ingress-addon-legacy-691000 stat /var/lib/dpkg/alternatives/iptables
I0223 16:52:04.809122 27829 oci.go:144] the created container "ingress-addon-legacy-691000" has a running status.
I0223 16:52:04.809160 27829 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa...
I0223 16:52:04.893385 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0223 16:52:04.893456 27829 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0223 16:52:04.997763 27829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Status}}
I0223 16:52:05.061535 27829 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0223 16:52:05.061554 27829 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-691000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0223 16:52:05.161069 27829 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-691000 --format={{.State.Status}}
I0223 16:52:05.218395 27829 machine.go:88] provisioning docker machine ...
I0223 16:52:05.218452 27829 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-691000"
I0223 16:52:05.218562 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:05.277315 27829 main.go:141] libmachine: Using SSH client type: native
I0223 16:52:05.277715 27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57528 <nil> <nil>}
I0223 16:52:05.277731 27829 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-691000 && echo "ingress-addon-legacy-691000" | sudo tee /etc/hostname
I0223 16:52:05.423375 27829 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-691000
I0223 16:52:05.423486 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:05.479830 27829 main.go:141] libmachine: Using SSH client type: native
I0223 16:52:05.480179 27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57528 <nil> <nil>}
I0223 16:52:05.480193 27829 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-691000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-691000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-691000' | sudo tee -a /etc/hosts;
fi
fi
I0223 16:52:05.616345 27829 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 16:52:05.616368 27829 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-24428/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-24428/.minikube}
I0223 16:52:05.616387 27829 ubuntu.go:177] setting up certificates
I0223 16:52:05.616392 27829 provision.go:83] configureAuth start
I0223 16:52:05.616466 27829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-691000
I0223 16:52:05.672597 27829 provision.go:138] copyHostCerts
I0223 16:52:05.672641 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
I0223 16:52:05.672697 27829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem, removing ...
I0223 16:52:05.672706 27829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem
I0223 16:52:05.672811 27829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.pem (1082 bytes)
I0223 16:52:05.672979 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
I0223 16:52:05.673015 27829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem, removing ...
I0223 16:52:05.673020 27829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem
I0223 16:52:05.673084 27829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/cert.pem (1123 bytes)
I0223 16:52:05.673211 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
I0223 16:52:05.673256 27829 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem, removing ...
I0223 16:52:05.673262 27829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem
I0223 16:52:05.673323 27829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-24428/.minikube/key.pem (1679 bytes)
I0223 16:52:05.673457 27829 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-691000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-691000]
I0223 16:52:05.815029 27829 provision.go:172] copyRemoteCerts
I0223 16:52:05.815093 27829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 16:52:05.815149 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:05.871932 27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
I0223 16:52:05.968616 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0223 16:52:05.968710 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0223 16:52:05.986361 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem -> /etc/docker/server.pem
I0223 16:52:05.986443 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0223 16:52:06.004761 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0223 16:52:06.004838 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0223 16:52:06.022014 27829 provision.go:86] duration metric: configureAuth took 405.596663ms
I0223 16:52:06.022034 27829 ubuntu.go:193] setting minikube options for container-runtime
I0223 16:52:06.022193 27829 config.go:182] Loaded profile config "ingress-addon-legacy-691000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0223 16:52:06.022263 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:06.080229 27829 main.go:141] libmachine: Using SSH client type: native
I0223 16:52:06.080591 27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57528 <nil> <nil>}
I0223 16:52:06.080609 27829 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 16:52:06.214396 27829 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0223 16:52:06.214416 27829 ubuntu.go:71] root file system type: overlay
I0223 16:52:06.214509 27829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 16:52:06.214593 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:06.271367 27829 main.go:141] libmachine: Using SSH client type: native
I0223 16:52:06.271726 27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57528 <nil> <nil>}
I0223 16:52:06.271774 27829 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 16:52:06.415765 27829 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 16:52:06.415859 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:06.473267 27829 main.go:141] libmachine: Using SSH client type: native
I0223 16:52:06.473618 27829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57528 <nil> <nil>}
I0223 16:52:06.473632 27829 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 16:52:07.121227 27829 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-02-24 00:52:06.412418193 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0223 16:52:07.121272 27829 machine.go:91] provisioned docker machine in 1.902775682s
I0223 16:52:07.121279 27829 client.go:171] LocalClient.Create took 10.278521565s
I0223 16:52:07.121303 27829 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-691000" took 10.278641491s
I0223 16:52:07.121315 27829 start.go:300] post-start starting for "ingress-addon-legacy-691000" (driver="docker")
I0223 16:52:07.121320 27829 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 16:52:07.121437 27829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 16:52:07.121531 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:07.183236 27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
I0223 16:52:07.279272 27829 ssh_runner.go:195] Run: cat /etc/os-release
I0223 16:52:07.282912 27829 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0223 16:52:07.282932 27829 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0223 16:52:07.282939 27829 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0223 16:52:07.282945 27829 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0223 16:52:07.282957 27829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/addons for local assets ...
I0223 16:52:07.283067 27829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-24428/.minikube/files for local assets ...
I0223 16:52:07.283240 27829 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> 248852.pem in /etc/ssl/certs
I0223 16:52:07.283248 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /etc/ssl/certs/248852.pem
I0223 16:52:07.283439 27829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 16:52:07.290917 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /etc/ssl/certs/248852.pem (1708 bytes)
I0223 16:52:07.309069 27829 start.go:303] post-start completed in 187.731223ms
I0223 16:52:07.309604 27829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-691000
I0223 16:52:07.369800 27829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/config.json ...
I0223 16:52:07.370294 27829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0223 16:52:07.370370 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:07.427650 27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
I0223 16:52:07.523046 27829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0223 16:52:07.527917 27829 start.go:128] duration metric: createHost completed in 10.736876549s
I0223 16:52:07.527938 27829 start.go:83] releasing machines lock for "ingress-addon-legacy-691000", held for 10.736967972s
I0223 16:52:07.528034 27829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-691000
I0223 16:52:07.585355 27829 ssh_runner.go:195] Run: cat /version.json
I0223 16:52:07.585388 27829 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0223 16:52:07.585426 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:07.585475 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:07.648661 27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
I0223 16:52:07.648898 27829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-24428/.minikube/machines/ingress-addon-legacy-691000/id_rsa Username:docker}
I0223 16:52:07.740553 27829 ssh_runner.go:195] Run: systemctl --version
I0223 16:52:08.002210 27829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0223 16:52:08.007538 27829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0223 16:52:08.027598 27829 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0223 16:52:08.027681 27829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0223 16:52:08.041564 27829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0223 16:52:08.049470 27829 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0223 16:52:08.049485 27829 start.go:485] detecting cgroup driver to use...
I0223 16:52:08.049496 27829 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 16:52:08.049574 27829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 16:52:08.062788 27829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0223 16:52:08.071363 27829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 16:52:08.079741 27829 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 16:52:08.079798 27829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 16:52:08.088356 27829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 16:52:08.097037 27829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 16:52:08.105846 27829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 16:52:08.114332 27829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 16:52:08.122354 27829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0223 16:52:08.130786 27829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 16:52:08.138232 27829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 16:52:08.145921 27829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 16:52:08.215543 27829 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 16:52:08.285440 27829 start.go:485] detecting cgroup driver to use...
I0223 16:52:08.285461 27829 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 16:52:08.285538 27829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 16:52:08.299424 27829 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0223 16:52:08.299491 27829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 16:52:08.310802 27829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0223 16:52:08.324518 27829 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 16:52:08.388955 27829 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 16:52:08.489621 27829 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 16:52:08.489640 27829 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0223 16:52:08.503856 27829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 16:52:08.598167 27829 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 16:52:08.841237 27829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 16:52:08.868118 27829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 16:52:08.916583 27829 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
I0223 16:52:08.916744 27829 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-691000 dig +short host.docker.internal
I0223 16:52:09.032899 27829 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0223 16:52:09.033014 27829 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0223 16:52:09.037842 27829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 16:52:09.048692 27829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-691000
I0223 16:52:09.107753 27829 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 16:52:09.107840 27829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 16:52:09.128434 27829 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0223 16:52:09.128452 27829 docker.go:560] Images already preloaded, skipping extraction
I0223 16:52:09.128540 27829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 16:52:09.150034 27829 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0223 16:52:09.150050 27829 cache_images.go:84] Images are preloaded, skipping loading
I0223 16:52:09.150151 27829 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0223 16:52:09.175862 27829 cni.go:84] Creating CNI manager for ""
I0223 16:52:09.175880 27829 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0223 16:52:09.175895 27829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0223 16:52:09.175909 27829 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-691000 NodeName:ingress-addon-legacy-691000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0223 16:52:09.176023 27829 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-691000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0223 16:52:09.176107 27829 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-691000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0223 16:52:09.176167 27829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0223 16:52:09.184267 27829 binaries.go:44] Found k8s binaries, skipping transfer
I0223 16:52:09.184325 27829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0223 16:52:09.192988 27829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0223 16:52:09.208585 27829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0223 16:52:09.222134 27829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0223 16:52:09.235490 27829 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0223 16:52:09.240304 27829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 16:52:09.250638 27829 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000 for IP: 192.168.49.2
I0223 16:52:09.250660 27829 certs.go:186] acquiring lock for shared ca certs: {Name:mka4f8a2d0723293f88499f80fb83a53e78a6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 16:52:09.250840 27829 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key
I0223 16:52:09.250899 27829 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key
I0223 16:52:09.250946 27829 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.key
I0223 16:52:09.250958 27829 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.crt with IP's: []
I0223 16:52:09.352824 27829 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.crt ...
I0223 16:52:09.352839 27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.crt: {Name:mkf4a38d775d0b7de4649fb0074f3eec41a516ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 16:52:09.353149 27829 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.key ...
I0223 16:52:09.353158 27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/client.key: {Name:mkf91f763e830a3815d402105422bcece62e2244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 16:52:09.353363 27829 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key.dd3b5fb2
I0223 16:52:09.353380 27829 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0223 16:52:09.432508 27829 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt.dd3b5fb2 ...
I0223 16:52:09.432521 27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt.dd3b5fb2: {Name:mk614948d8a87630582cfd9d4f25e3c57c069cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 16:52:09.432824 27829 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key.dd3b5fb2 ...
I0223 16:52:09.432832 27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key.dd3b5fb2: {Name:mk8243386d48e231dfda2178217165573ac326e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 16:52:09.433023 27829 certs.go:333] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt
I0223 16:52:09.433196 27829 certs.go:337] copying /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key
I0223 16:52:09.433365 27829 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key
I0223 16:52:09.433380 27829 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt with IP's: []
I0223 16:52:09.549410 27829 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt ...
I0223 16:52:09.549424 27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt: {Name:mk3248da3723150da08b3caa3ba0766e319a02a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 16:52:09.567970 27829 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key ...
I0223 16:52:09.568002 27829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key: {Name:mkc36b64e51acc0589a7c6bce01544c39b448cfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 16:52:09.590352 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0223 16:52:09.590438 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0223 16:52:09.590482 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0223 16:52:09.590522 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0223 16:52:09.590561 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0223 16:52:09.590598 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0223 16:52:09.590634 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0223 16:52:09.590668 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0223 16:52:09.590835 27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem (1338 bytes)
W0223 16:52:09.590935 27829 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885_empty.pem, impossibly tiny 0 bytes
I0223 16:52:09.590966 27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca-key.pem (1675 bytes)
I0223 16:52:09.591042 27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/ca.pem (1082 bytes)
I0223 16:52:09.591126 27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/cert.pem (1123 bytes)
I0223 16:52:09.591174 27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/certs/key.pem (1679 bytes)
I0223 16:52:09.591269 27829 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem (1708 bytes)
I0223 16:52:09.591312 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0223 16:52:09.591339 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem -> /usr/share/ca-certificates/24885.pem
I0223 16:52:09.591364 27829 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem -> /usr/share/ca-certificates/248852.pem
I0223 16:52:09.592023 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0223 16:52:09.610797 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0223 16:52:09.627908 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0223 16:52:09.645477 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/profiles/ingress-addon-legacy-691000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0223 16:52:09.663562 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0223 16:52:09.680796 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0223 16:52:09.699445 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0223 16:52:09.718105 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0223 16:52:09.735290 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0223 16:52:09.753419 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/certs/24885.pem --> /usr/share/ca-certificates/24885.pem (1338 bytes)
I0223 16:52:09.770680 27829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-24428/.minikube/files/etc/ssl/certs/248852.pem --> /usr/share/ca-certificates/248852.pem (1708 bytes)
I0223 16:52:09.788221 27829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0223 16:52:09.802626 27829 ssh_runner.go:195] Run: openssl version
I0223 16:52:09.808270 27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24885.pem && ln -fs /usr/share/ca-certificates/24885.pem /etc/ssl/certs/24885.pem"
I0223 16:52:09.816379 27829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24885.pem
I0223 16:52:09.820269 27829 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/24885.pem
I0223 16:52:09.820325 27829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24885.pem
I0223 16:52:09.825919 27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24885.pem /etc/ssl/certs/51391683.0"
I0223 16:52:09.834258 27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/248852.pem && ln -fs /usr/share/ca-certificates/248852.pem /etc/ssl/certs/248852.pem"
I0223 16:52:09.842908 27829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/248852.pem
I0223 16:52:09.847687 27829 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/248852.pem
I0223 16:52:09.847753 27829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/248852.pem
I0223 16:52:09.853444 27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/248852.pem /etc/ssl/certs/3ec20f2e.0"
I0223 16:52:09.861603 27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0223 16:52:09.869839 27829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0223 16:52:09.873871 27829 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
I0223 16:52:09.873921 27829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0223 16:52:09.879227 27829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0223 16:52:09.887494 27829 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-691000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 16:52:09.887606 27829 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 16:52:09.907703 27829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0223 16:52:09.915720 27829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0223 16:52:09.923117 27829 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0223 16:52:09.923178 27829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 16:52:09.930732 27829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 16:52:09.930767 27829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0223 16:52:09.981658 27829 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0223 16:52:09.981712 27829 kubeadm.go:322] [preflight] Running pre-flight checks
I0223 16:52:10.149872 27829 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0223 16:52:10.149957 27829 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0223 16:52:10.150029 27829 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 16:52:10.306960 27829 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 16:52:10.307460 27829 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 16:52:10.307520 27829 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0223 16:52:10.384782 27829 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 16:52:10.427168 27829 out.go:204] - Generating certificates and keys ...
I0223 16:52:10.427258 27829 kubeadm.go:322] [certs] Using existing ca certificate authority
I0223 16:52:10.427320 27829 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0223 16:52:10.459532 27829 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0223 16:52:10.604137 27829 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0223 16:52:10.680105 27829 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0223 16:52:10.760856 27829 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0223 16:52:10.898702 27829 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0223 16:52:10.898826 27829 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0223 16:52:11.097799 27829 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0223 16:52:11.097967 27829 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0223 16:52:11.327065 27829 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0223 16:52:11.434865 27829 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0223 16:52:11.723209 27829 kubeadm.go:322] [certs] Generating "sa" key and public key
I0223 16:52:11.723282 27829 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 16:52:11.882310 27829 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 16:52:11.983532 27829 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 16:52:12.206122 27829 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 16:52:12.340997 27829 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 16:52:12.359302 27829 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 16:52:12.379710 27829 out.go:204] - Booting up control plane ...
I0223 16:52:12.379931 27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 16:52:12.380129 27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 16:52:12.380317 27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 16:52:12.380457 27829 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 16:52:12.380718 27829 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0223 16:52:52.351203 27829 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0223 16:52:52.357264 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:52:52.357466 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:52:57.353104 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:52:57.354661 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:53:07.355162 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:53:07.355370 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:53:27.357356 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:53:27.357593 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:54:07.360166 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:54:07.360387 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:54:07.360421 27829 kubeadm.go:322]
I0223 16:54:07.360484 27829 kubeadm.go:322] Unfortunately, an error has occurred:
I0223 16:54:07.360542 27829 kubeadm.go:322] timed out waiting for the condition
I0223 16:54:07.360559 27829 kubeadm.go:322]
I0223 16:54:07.360600 27829 kubeadm.go:322] This error is likely caused by:
I0223 16:54:07.360637 27829 kubeadm.go:322] - The kubelet is not running
I0223 16:54:07.360766 27829 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0223 16:54:07.360785 27829 kubeadm.go:322]
I0223 16:54:07.360916 27829 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0223 16:54:07.360963 27829 kubeadm.go:322] - 'systemctl status kubelet'
I0223 16:54:07.361000 27829 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0223 16:54:07.361005 27829 kubeadm.go:322]
I0223 16:54:07.361184 27829 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0223 16:54:07.361279 27829 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0223 16:54:07.361297 27829 kubeadm.go:322]
I0223 16:54:07.361401 27829 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0223 16:54:07.361461 27829 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0223 16:54:07.361594 27829 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0223 16:54:07.361633 27829 kubeadm.go:322] - 'docker logs CONTAINERID'
I0223 16:54:07.361651 27829 kubeadm.go:322]
I0223 16:54:07.365064 27829 kubeadm.go:322] W0224 00:52:09.980624 1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0223 16:54:07.365222 27829 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0223 16:54:07.365287 27829 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0223 16:54:07.365400 27829 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0223 16:54:07.365495 27829 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0223 16:54:07.365607 27829 kubeadm.go:322] W0224 00:52:12.345572 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 16:54:07.365748 27829 kubeadm.go:322] W0224 00:52:12.346273 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 16:54:07.365826 27829 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0223 16:54:07.365899 27829 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0223 16:54:07.366097 27829 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 00:52:09.980624 1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 00:52:12.345572 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 00:52:12.346273 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-691000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 00:52:09.980624 1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 00:52:12.345572 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 00:52:12.346273 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0223 16:54:07.366130 27829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0223 16:54:07.777324 27829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 16:54:07.786991 27829 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0223 16:54:07.787048 27829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 16:54:07.794471 27829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 16:54:07.794492 27829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0223 16:54:07.841229 27829 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0223 16:54:07.841290 27829 kubeadm.go:322] [preflight] Running pre-flight checks
I0223 16:54:08.000421 27829 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0223 16:54:08.000512 27829 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0223 16:54:08.000600 27829 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 16:54:08.151909 27829 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 16:54:08.152579 27829 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 16:54:08.152620 27829 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0223 16:54:08.226589 27829 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 16:54:08.248030 27829 out.go:204] - Generating certificates and keys ...
I0223 16:54:08.248152 27829 kubeadm.go:322] [certs] Using existing ca certificate authority
I0223 16:54:08.248220 27829 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0223 16:54:08.248286 27829 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0223 16:54:08.248341 27829 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0223 16:54:08.248395 27829 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0223 16:54:08.248453 27829 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0223 16:54:08.248535 27829 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0223 16:54:08.248588 27829 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0223 16:54:08.248653 27829 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0223 16:54:08.248728 27829 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0223 16:54:08.248759 27829 kubeadm.go:322] [certs] Using the existing "sa" key
I0223 16:54:08.248844 27829 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 16:54:08.289940 27829 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 16:54:08.415232 27829 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 16:54:08.519711 27829 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 16:54:08.634303 27829 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 16:54:08.634640 27829 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 16:54:08.656511 27829 out.go:204] - Booting up control plane ...
I0223 16:54:08.656815 27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 16:54:08.656974 27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 16:54:08.657086 27829 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 16:54:08.657227 27829 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 16:54:08.657499 27829 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0223 16:54:48.644721 27829 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0223 16:54:48.645420 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:54:48.645641 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:54:53.645656 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:54:53.645825 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:55:03.648224 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:55:03.648447 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:55:23.649243 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:55:23.649534 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:56:03.652480 27829 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 16:56:03.652701 27829 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 16:56:03.652718 27829 kubeadm.go:322]
I0223 16:56:03.652762 27829 kubeadm.go:322] Unfortunately, an error has occurred:
I0223 16:56:03.652806 27829 kubeadm.go:322] timed out waiting for the condition
I0223 16:56:03.652813 27829 kubeadm.go:322]
I0223 16:56:03.652850 27829 kubeadm.go:322] This error is likely caused by:
I0223 16:56:03.652885 27829 kubeadm.go:322] - The kubelet is not running
I0223 16:56:03.652989 27829 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0223 16:56:03.652996 27829 kubeadm.go:322]
I0223 16:56:03.653132 27829 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0223 16:56:03.653193 27829 kubeadm.go:322] - 'systemctl status kubelet'
I0223 16:56:03.653236 27829 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0223 16:56:03.653244 27829 kubeadm.go:322]
I0223 16:56:03.653351 27829 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0223 16:56:03.653443 27829 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0223 16:56:03.653450 27829 kubeadm.go:322]
I0223 16:56:03.653558 27829 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0223 16:56:03.653617 27829 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0223 16:56:03.653708 27829 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0223 16:56:03.653740 27829 kubeadm.go:322] - 'docker logs CONTAINERID'
I0223 16:56:03.653749 27829 kubeadm.go:322]
I0223 16:56:03.655975 27829 kubeadm.go:322] W0224 00:54:07.840130 3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0223 16:56:03.656132 27829 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0223 16:56:03.656222 27829 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0223 16:56:03.656322 27829 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0223 16:56:03.656396 27829 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0223 16:56:03.656503 27829 kubeadm.go:322] W0224 00:54:08.637741 3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 16:56:03.656595 27829 kubeadm.go:322] W0224 00:54:08.638440 3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 16:56:03.656665 27829 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0223 16:56:03.656732 27829 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0223 16:56:03.656765 27829 kubeadm.go:403] StartCluster complete in 3m53.763660891s
I0223 16:56:03.656854 27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0223 16:56:03.677229 27829 logs.go:277] 0 containers: []
W0223 16:56:03.677248 27829 logs.go:279] No container was found matching "kube-apiserver"
I0223 16:56:03.677332 27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0223 16:56:03.696350 27829 logs.go:277] 0 containers: []
W0223 16:56:03.696366 27829 logs.go:279] No container was found matching "etcd"
I0223 16:56:03.696432 27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0223 16:56:03.716285 27829 logs.go:277] 0 containers: []
W0223 16:56:03.716298 27829 logs.go:279] No container was found matching "coredns"
I0223 16:56:03.716370 27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0223 16:56:03.735820 27829 logs.go:277] 0 containers: []
W0223 16:56:03.735833 27829 logs.go:279] No container was found matching "kube-scheduler"
I0223 16:56:03.735904 27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0223 16:56:03.754365 27829 logs.go:277] 0 containers: []
W0223 16:56:03.754388 27829 logs.go:279] No container was found matching "kube-proxy"
I0223 16:56:03.754462 27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0223 16:56:03.774370 27829 logs.go:277] 0 containers: []
W0223 16:56:03.774384 27829 logs.go:279] No container was found matching "kube-controller-manager"
I0223 16:56:03.774460 27829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0223 16:56:03.793239 27829 logs.go:277] 0 containers: []
W0223 16:56:03.793253 27829 logs.go:279] No container was found matching "kindnet"
I0223 16:56:03.793261 27829 logs.go:123] Gathering logs for kubelet ...
I0223 16:56:03.793268 27829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0223 16:56:03.832301 27829 logs.go:123] Gathering logs for dmesg ...
I0223 16:56:03.832315 27829 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0223 16:56:03.846105 27829 logs.go:123] Gathering logs for describe nodes ...
I0223 16:56:03.846118 27829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0223 16:56:03.899041 27829 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0223 16:56:03.899052 27829 logs.go:123] Gathering logs for Docker ...
I0223 16:56:03.899060 27829 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0223 16:56:03.922973 27829 logs.go:123] Gathering logs for container status ...
I0223 16:56:03.922986 27829 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0223 16:56:05.971858 27829 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048810956s)
W0223 16:56:05.971982 27829 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 00:54:07.840130 3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 00:54:08.637741 3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 00:54:08.638440 3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 16:56:05.971999 27829 out.go:239] *
W0223 16:56:05.972130 27829 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 00:54:07.840130 3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 00:54:08.637741 3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 00:54:08.638440 3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 16:56:05.972143 27829 out.go:239] *
W0223 16:56:05.972784 27829 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0223 16:56:06.037647 27829 out.go:177]
W0223 16:56:06.101584 27829 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 00:54:07.840130 3548 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 00:54:08.637741 3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 00:54:08.638440 3548 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 16:56:06.101754 27829 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0223 16:56:06.101822 27829 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0223 16:56:06.123458 27829 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-691000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (263.93s)