=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-721000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0224 14:55:10.302785 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:55:37.996010 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:55:54.062901 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.069298 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.081452 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.102145 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.143589 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.224738 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.384912 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.705149 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:55.347512 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:56.627934 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:59.188486 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:56:04.309901 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:56:14.551737 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:56:35.032477 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:57:15.995058 26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-721000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m24.607272475s)
-- stdout --
* [ingress-addon-legacy-721000] minikube v1.29.0 on Darwin 13.2.1
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-721000 in cluster ingress-addon-legacy-721000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0224 14:52:58.181934 30059 out.go:296] Setting OutFile to fd 1 ...
I0224 14:52:58.182105 30059 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 14:52:58.182110 30059 out.go:309] Setting ErrFile to fd 2...
I0224 14:52:58.182114 30059 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0224 14:52:58.182225 30059 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
I0224 14:52:58.183600 30059 out.go:303] Setting JSON to false
I0224 14:52:58.201854 30059 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6752,"bootTime":1677272426,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W0224 14:52:58.201955 30059 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0224 14:52:58.223317 30059 out.go:177] * [ingress-addon-legacy-721000] minikube v1.29.0 on Darwin 13.2.1
I0224 14:52:58.266413 30059 out.go:177] - MINIKUBE_LOCATION=15909
I0224 14:52:58.266431 30059 notify.go:220] Checking for updates...
I0224 14:52:58.310399 30059 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
I0224 14:52:58.332542 30059 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0224 14:52:58.354373 30059 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0224 14:52:58.376569 30059 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
I0224 14:52:58.398727 30059 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0224 14:52:58.420804 30059 driver.go:365] Setting default libvirt URI to qemu:///system
I0224 14:52:58.482054 30059 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0224 14:52:58.482185 30059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0224 14:52:58.623018 30059 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 22:52:58.531784684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0224 14:52:58.645517 30059 out.go:177] * Using the docker driver based on user configuration
I0224 14:52:58.671826 30059 start.go:296] selected driver: docker
I0224 14:52:58.671855 30059 start.go:857] validating driver "docker" against <nil>
I0224 14:52:58.671876 30059 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0224 14:52:58.675800 30059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0224 14:52:58.816885 30059 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 22:52:58.72579646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0224 14:52:58.817014 30059 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0224 14:52:58.817197 30059 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0224 14:52:58.838927 30059 out.go:177] * Using Docker Desktop driver with root privileges
I0224 14:52:58.860823 30059 cni.go:84] Creating CNI manager for ""
I0224 14:52:58.860876 30059 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0224 14:52:58.860893 30059 start_flags.go:319] config:
{Name:ingress-addon-legacy-721000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-721000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0224 14:52:58.904250 30059 out.go:177] * Starting control plane node ingress-addon-legacy-721000 in cluster ingress-addon-legacy-721000
I0224 14:52:58.925628 30059 cache.go:120] Beginning downloading kic base image for docker with docker
I0224 14:52:58.946517 30059 out.go:177] * Pulling base image ...
I0224 14:52:58.988704 30059 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
I0224 14:52:58.988707 30059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0224 14:52:59.046143 30059 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
I0224 14:52:59.046166 30059 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
I0224 14:52:59.097159 30059 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0224 14:52:59.097237 30059 cache.go:57] Caching tarball of preloaded images
I0224 14:52:59.097728 30059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0224 14:52:59.119627 30059 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0224 14:52:59.162334 30059 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0224 14:52:59.377767 30059 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0224 14:53:11.832816 30059 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0224 14:53:11.832994 30059 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0224 14:53:12.441239 30059 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0224 14:53:12.441544 30059 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/config.json ...
I0224 14:53:12.441572 30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/config.json: {Name:mk52ddce85e7b1119aa1adde8d4c66620a5d3735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 14:53:12.441931 30059 cache.go:193] Successfully downloaded all kic artifacts
I0224 14:53:12.441956 30059 start.go:364] acquiring machines lock for ingress-addon-legacy-721000: {Name:mkf84ae28139f6b533abe522fecd4e33229d5580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 14:53:12.442116 30059 start.go:368] acquired machines lock for "ingress-addon-legacy-721000" in 153.124µs
I0224 14:53:12.442139 30059 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-721000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-721000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0224 14:53:12.442244 30059 start.go:125] createHost starting for "" (driver="docker")
I0224 14:53:12.505380 30059 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0224 14:53:12.505660 30059 start.go:159] libmachine.API.Create for "ingress-addon-legacy-721000" (driver="docker")
I0224 14:53:12.505704 30059 client.go:168] LocalClient.Create starting
I0224 14:53:12.505905 30059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
I0224 14:53:12.505997 30059 main.go:141] libmachine: Decoding PEM data...
I0224 14:53:12.506034 30059 main.go:141] libmachine: Parsing certificate...
I0224 14:53:12.506140 30059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
I0224 14:53:12.506202 30059 main.go:141] libmachine: Decoding PEM data...
I0224 14:53:12.506219 30059 main.go:141] libmachine: Parsing certificate...
I0224 14:53:12.507008 30059 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-721000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0224 14:53:12.564667 30059 cli_runner.go:211] docker network inspect ingress-addon-legacy-721000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0224 14:53:12.564776 30059 network_create.go:281] running [docker network inspect ingress-addon-legacy-721000] to gather additional debugging logs...
I0224 14:53:12.564794 30059 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-721000
W0224 14:53:12.618658 30059 cli_runner.go:211] docker network inspect ingress-addon-legacy-721000 returned with exit code 1
I0224 14:53:12.618683 30059 network_create.go:284] error running [docker network inspect ingress-addon-legacy-721000]: docker network inspect ingress-addon-legacy-721000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-721000
I0224 14:53:12.618703 30059 network_create.go:286] output of [docker network inspect ingress-addon-legacy-721000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-721000
** /stderr **
I0224 14:53:12.618798 30059 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0224 14:53:12.673013 30059 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00052d630}
I0224 14:53:12.673045 30059 network_create.go:123] attempt to create docker network ingress-addon-legacy-721000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0224 14:53:12.673113 30059 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-721000 ingress-addon-legacy-721000
I0224 14:53:12.761514 30059 network_create.go:107] docker network ingress-addon-legacy-721000 192.168.49.0/24 created
I0224 14:53:12.761550 30059 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-721000" container
I0224 14:53:12.761662 30059 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0224 14:53:12.816351 30059 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-721000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-721000 --label created_by.minikube.sigs.k8s.io=true
I0224 14:53:12.873125 30059 oci.go:103] Successfully created a docker volume ingress-addon-legacy-721000
I0224 14:53:12.873250 30059 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-721000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-721000 --entrypoint /usr/bin/test -v ingress-addon-legacy-721000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
I0224 14:53:13.301802 30059 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-721000
I0224 14:53:13.301848 30059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0224 14:53:13.301863 30059 kic.go:190] Starting extracting preloaded images to volume ...
I0224 14:53:13.301986 30059 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-721000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
I0224 14:53:19.681973 30059 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-721000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.379750721s)
I0224 14:53:19.681993 30059 kic.go:199] duration metric: took 6.380015 seconds to extract preloaded images to volume
I0224 14:53:19.682114 30059 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0224 14:53:19.827636 30059 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-721000 --name ingress-addon-legacy-721000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-721000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-721000 --network ingress-addon-legacy-721000 --ip 192.168.49.2 --volume ingress-addon-legacy-721000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
I0224 14:53:20.302105 30059 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Running}}
I0224 14:53:20.363536 30059 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Status}}
I0224 14:53:20.425966 30059 cli_runner.go:164] Run: docker exec ingress-addon-legacy-721000 stat /var/lib/dpkg/alternatives/iptables
I0224 14:53:20.542647 30059 oci.go:144] the created container "ingress-addon-legacy-721000" has a running status.
I0224 14:53:20.542681 30059 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa...
I0224 14:53:20.600825 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0224 14:53:20.600915 30059 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0224 14:53:20.710672 30059 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Status}}
I0224 14:53:20.772336 30059 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0224 14:53:20.772358 30059 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-721000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0224 14:53:20.877755 30059 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Status}}
I0224 14:53:20.935523 30059 machine.go:88] provisioning docker machine ...
I0224 14:53:20.935566 30059 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-721000"
I0224 14:53:20.935668 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:20.992840 30059 main.go:141] libmachine: Using SSH client type: native
I0224 14:53:20.993240 30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57495 <nil> <nil>}
I0224 14:53:20.993258 30059 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-721000 && echo "ingress-addon-legacy-721000" | sudo tee /etc/hostname
I0224 14:53:21.137995 30059 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-721000
I0224 14:53:21.138091 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:21.195659 30059 main.go:141] libmachine: Using SSH client type: native
I0224 14:53:21.196010 30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57495 <nil> <nil>}
I0224 14:53:21.196026 30059 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-721000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-721000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-721000' | sudo tee -a /etc/hosts;
fi
fi
I0224 14:53:21.331659 30059 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0224 14:53:21.331681 30059 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
I0224 14:53:21.331701 30059 ubuntu.go:177] setting up certificates
I0224 14:53:21.331706 30059 provision.go:83] configureAuth start
I0224 14:53:21.331777 30059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-721000
I0224 14:53:21.388148 30059 provision.go:138] copyHostCerts
I0224 14:53:21.388196 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
I0224 14:53:21.388258 30059 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
I0224 14:53:21.388266 30059 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
I0224 14:53:21.388373 30059 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
I0224 14:53:21.388534 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
I0224 14:53:21.388565 30059 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
I0224 14:53:21.388570 30059 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
I0224 14:53:21.388632 30059 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
I0224 14:53:21.388748 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
I0224 14:53:21.388785 30059 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
I0224 14:53:21.388791 30059 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
I0224 14:53:21.388854 30059 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
I0224 14:53:21.388975 30059 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-721000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-721000]
I0224 14:53:21.637517 30059 provision.go:172] copyRemoteCerts
I0224 14:53:21.637575 30059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 14:53:21.637631 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:21.695767 30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
I0224 14:53:21.791725 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem -> /etc/docker/server.pem
I0224 14:53:21.791812 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0224 14:53:21.808984 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0224 14:53:21.809056 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0224 14:53:21.826214 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0224 14:53:21.826281 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0224 14:53:21.843433 30059 provision.go:86] duration metric: configureAuth took 511.700655ms
I0224 14:53:21.843448 30059 ubuntu.go:193] setting minikube options for container-runtime
I0224 14:53:21.843612 30059 config.go:182] Loaded profile config "ingress-addon-legacy-721000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0224 14:53:21.843681 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:21.902376 30059 main.go:141] libmachine: Using SSH client type: native
I0224 14:53:21.902750 30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57495 <nil> <nil>}
I0224 14:53:21.902767 30059 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0224 14:53:22.039549 30059 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0224 14:53:22.039561 30059 ubuntu.go:71] root file system type: overlay
I0224 14:53:22.039652 30059 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0224 14:53:22.039744 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:22.095247 30059 main.go:141] libmachine: Using SSH client type: native
I0224 14:53:22.095608 30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57495 <nil> <nil>}
I0224 14:53:22.095655 30059 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0224 14:53:22.238926 30059 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0224 14:53:22.239016 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:22.295941 30059 main.go:141] libmachine: Using SSH client type: native
I0224 14:53:22.296294 30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 57495 <nil> <nil>}
I0224 14:53:22.296306 30059 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0224 14:53:22.918023 30059 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-02-24 22:53:22.235900562 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0224 14:53:22.918049 30059 machine.go:91] provisioned docker machine in 1.982471367s
I0224 14:53:22.918055 30059 client.go:171] LocalClient.Create took 10.41215729s
I0224 14:53:22.918075 30059 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-721000" took 10.412228162s
I0224 14:53:22.918087 30059 start.go:300] post-start starting for "ingress-addon-legacy-721000" (driver="docker")
I0224 14:53:22.918094 30059 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 14:53:22.918181 30059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 14:53:22.918241 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:22.976772 30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
I0224 14:53:23.074331 30059 ssh_runner.go:195] Run: cat /etc/os-release
I0224 14:53:23.078014 30059 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0224 14:53:23.078030 30059 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0224 14:53:23.078037 30059 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0224 14:53:23.078042 30059 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0224 14:53:23.078054 30059 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
I0224 14:53:23.078155 30059 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
I0224 14:53:23.078322 30059 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
I0224 14:53:23.078328 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /etc/ssl/certs/268712.pem
I0224 14:53:23.078521 30059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 14:53:23.085778 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
I0224 14:53:23.102853 30059 start.go:303] post-start completed in 184.75225ms
I0224 14:53:23.103385 30059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-721000
I0224 14:53:23.159511 30059 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/config.json ...
I0224 14:53:23.159937 30059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0224 14:53:23.159992 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:23.217122 30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
I0224 14:53:23.308695 30059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0224 14:53:23.313580 30059 start.go:128] duration metric: createHost completed in 10.871113692s
I0224 14:53:23.313599 30059 start.go:83] releasing machines lock for "ingress-addon-legacy-721000", held for 10.871278695s
I0224 14:53:23.313684 30059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-721000
I0224 14:53:23.370591 30059 ssh_runner.go:195] Run: cat /version.json
I0224 14:53:23.370624 30059 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0224 14:53:23.370671 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:23.370693 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:23.432268 30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
I0224 14:53:23.432457 30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
I0224 14:53:23.525315 30059 ssh_runner.go:195] Run: systemctl --version
I0224 14:53:23.733260 30059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0224 14:53:23.738417 30059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0224 14:53:23.758420 30059 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0224 14:53:23.758490 30059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0224 14:53:23.772218 30059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0224 14:53:23.779905 30059 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0224 14:53:23.779918 30059 start.go:485] detecting cgroup driver to use...
I0224 14:53:23.779928 30059 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0224 14:53:23.780008 30059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 14:53:23.793245 30059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0224 14:53:23.801898 30059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 14:53:23.810209 30059 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 14:53:23.810267 30059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 14:53:23.818908 30059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 14:53:23.827427 30059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 14:53:23.835826 30059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 14:53:23.844264 30059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 14:53:23.852293 30059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0224 14:53:23.860664 30059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 14:53:23.867842 30059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0224 14:53:23.874870 30059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 14:53:23.938881 30059 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 14:53:24.015057 30059 start.go:485] detecting cgroup driver to use...
I0224 14:53:24.015075 30059 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0224 14:53:24.015153 30059 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0224 14:53:24.025726 30059 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0224 14:53:24.025796 30059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 14:53:24.036759 30059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0224 14:53:24.051055 30059 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0224 14:53:24.157945 30059 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0224 14:53:24.250060 30059 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0224 14:53:24.250092 30059 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0224 14:53:24.264107 30059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 14:53:24.360724 30059 ssh_runner.go:195] Run: sudo systemctl restart docker
I0224 14:53:24.584636 30059 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0224 14:53:24.611969 30059 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0224 14:53:24.659936 30059 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
I0224 14:53:24.660161 30059 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-721000 dig +short host.docker.internal
I0224 14:53:24.779029 30059 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0224 14:53:24.779133 30059 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0224 14:53:24.783671 30059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 14:53:24.793656 30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
I0224 14:53:24.851176 30059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0224 14:53:24.851279 30059 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 14:53:24.871434 30059 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0224 14:53:24.871450 30059 docker.go:560] Images already preloaded, skipping extraction
I0224 14:53:24.871519 30059 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 14:53:24.892340 30059 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0224 14:53:24.892364 30059 cache_images.go:84] Images are preloaded, skipping loading
I0224 14:53:24.892451 30059 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0224 14:53:24.918539 30059 cni.go:84] Creating CNI manager for ""
I0224 14:53:24.918558 30059 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0224 14:53:24.918574 30059 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0224 14:53:24.918596 30059 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-721000 NodeName:ingress-addon-legacy-721000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0224 14:53:24.918717 30059 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-721000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0224 14:53:24.918799 30059 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-721000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-721000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0224 14:53:24.918862 30059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0224 14:53:24.926933 30059 binaries.go:44] Found k8s binaries, skipping transfer
I0224 14:53:24.926997 30059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0224 14:53:24.934470 30059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0224 14:53:24.947171 30059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0224 14:53:24.960428 30059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0224 14:53:24.973779 30059 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0224 14:53:24.977828 30059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0224 14:53:24.987744 30059 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000 for IP: 192.168.49.2
I0224 14:53:24.987764 30059 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 14:53:24.987931 30059 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
I0224 14:53:24.987998 30059 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
I0224 14:53:24.988041 30059 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.key
I0224 14:53:24.988053 30059 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.crt with IP's: []
I0224 14:53:25.082625 30059 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.crt ...
I0224 14:53:25.082638 30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.crt: {Name:mk97312d2d5782f42d613977a91abf12f03f9ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 14:53:25.083020 30059 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.key ...
I0224 14:53:25.083031 30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.key: {Name:mkaa96049743a9a4c17b08c87d839ddbfddefd1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 14:53:25.083269 30059 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key.dd3b5fb2
I0224 14:53:25.083286 30059 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0224 14:53:25.148858 30059 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt.dd3b5fb2 ...
I0224 14:53:25.148866 30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt.dd3b5fb2: {Name:mk73b3809824182651a8eadb9727dd5e66ad90f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 14:53:25.149094 30059 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key.dd3b5fb2 ...
I0224 14:53:25.149102 30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key.dd3b5fb2: {Name:mkbb90123778aa98b26a330275094dc8b741bcc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 14:53:25.149288 30059 certs.go:333] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt
I0224 14:53:25.149467 30059 certs.go:337] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key
I0224 14:53:25.149619 30059 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key
I0224 14:53:25.149633 30059 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt with IP's: []
I0224 14:53:25.498835 30059 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt ...
I0224 14:53:25.498851 30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt: {Name:mkdcda852c2995bc66118519dd0dcc2ab740c576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 14:53:25.499190 30059 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key ...
I0224 14:53:25.499199 30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key: {Name:mk252af65af1521443d09681340cbb1597b80fc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0224 14:53:25.499376 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0224 14:53:25.499416 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0224 14:53:25.499438 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0224 14:53:25.499458 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0224 14:53:25.499481 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0224 14:53:25.499501 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0224 14:53:25.499519 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0224 14:53:25.499538 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0224 14:53:25.499639 30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
W0224 14:53:25.499689 30059 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
I0224 14:53:25.499701 30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
I0224 14:53:25.499742 30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
I0224 14:53:25.499779 30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
I0224 14:53:25.499815 30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
I0224 14:53:25.499891 30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
I0224 14:53:25.499921 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0224 14:53:25.499940 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem -> /usr/share/ca-certificates/26871.pem
I0224 14:53:25.499958 30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /usr/share/ca-certificates/268712.pem
I0224 14:53:25.500467 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0224 14:53:25.519284 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0224 14:53:25.536452 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0224 14:53:25.553951 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0224 14:53:25.571738 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0224 14:53:25.589206 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0224 14:53:25.606794 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0224 14:53:25.624109 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0224 14:53:25.641462 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0224 14:53:25.659127 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
I0224 14:53:25.676559 30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
I0224 14:53:25.693905 30059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0224 14:53:25.707358 30059 ssh_runner.go:195] Run: openssl version
I0224 14:53:25.712867 30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
I0224 14:53:25.720767 30059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
I0224 14:53:25.724669 30059 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
I0224 14:53:25.724720 30059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
I0224 14:53:25.730114 30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
I0224 14:53:25.738468 30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
I0224 14:53:25.746600 30059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
I0224 14:53:25.750519 30059 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
I0224 14:53:25.750565 30059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
I0224 14:53:25.756244 30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
I0224 14:53:25.764451 30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0224 14:53:25.772474 30059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0224 14:53:25.776456 30059 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
I0224 14:53:25.776500 30059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0224 14:53:25.782044 30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0224 14:53:25.790125 30059 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-721000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-721000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0224 14:53:25.790259 30059 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0224 14:53:25.809486 30059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0224 14:53:25.817271 30059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0224 14:53:25.824784 30059 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0224 14:53:25.824888 30059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0224 14:53:25.832950 30059 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0224 14:53:25.832980 30059 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0224 14:53:25.880682 30059 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0224 14:53:25.880750 30059 kubeadm.go:322] [preflight] Running pre-flight checks
I0224 14:53:26.050215 30059 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0224 14:53:26.050293 30059 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0224 14:53:26.050379 30059 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0224 14:53:26.204451 30059 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0224 14:53:26.204963 30059 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0224 14:53:26.205004 30059 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0224 14:53:26.282649 30059 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0224 14:53:26.304336 30059 out.go:204] - Generating certificates and keys ...
I0224 14:53:26.304439 30059 kubeadm.go:322] [certs] Using existing ca certificate authority
I0224 14:53:26.304521 30059 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0224 14:53:26.610536 30059 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0224 14:53:26.870683 30059 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0224 14:53:27.023222 30059 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0224 14:53:27.093151 30059 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0224 14:53:27.473679 30059 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0224 14:53:27.473831 30059 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0224 14:53:27.569128 30059 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0224 14:53:27.569260 30059 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0224 14:53:27.645072 30059 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0224 14:53:27.850789 30059 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0224 14:53:28.031182 30059 kubeadm.go:322] [certs] Generating "sa" key and public key
I0224 14:53:28.031321 30059 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0224 14:53:28.240457 30059 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0224 14:53:28.307493 30059 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0224 14:53:28.591814 30059 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0224 14:53:28.804834 30059 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0224 14:53:28.805827 30059 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0224 14:53:28.826283 30059 out.go:204] - Booting up control plane ...
I0224 14:53:28.826465 30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0224 14:53:28.826624 30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0224 14:53:28.826738 30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0224 14:53:28.826867 30059 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0224 14:53:28.827249 30059 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0224 14:54:08.816546 30059 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0224 14:54:08.817953 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:54:08.818169 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:54:13.819939 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:54:13.820193 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:54:23.822140 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:54:23.822395 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:54:43.824430 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:54:43.824677 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:55:23.827417 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:55:23.827647 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:55:23.827661 30059 kubeadm.go:322]
I0224 14:55:23.827702 30059 kubeadm.go:322] Unfortunately, an error has occurred:
I0224 14:55:23.827796 30059 kubeadm.go:322] timed out waiting for the condition
I0224 14:55:23.827815 30059 kubeadm.go:322]
I0224 14:55:23.827854 30059 kubeadm.go:322] This error is likely caused by:
I0224 14:55:23.827926 30059 kubeadm.go:322] - The kubelet is not running
I0224 14:55:23.828082 30059 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0224 14:55:23.828098 30059 kubeadm.go:322]
I0224 14:55:23.828227 30059 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0224 14:55:23.828291 30059 kubeadm.go:322] - 'systemctl status kubelet'
I0224 14:55:23.828330 30059 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0224 14:55:23.828336 30059 kubeadm.go:322]
I0224 14:55:23.828460 30059 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0224 14:55:23.828581 30059 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0224 14:55:23.828595 30059 kubeadm.go:322]
I0224 14:55:23.828690 30059 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0224 14:55:23.828745 30059 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0224 14:55:23.828825 30059 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0224 14:55:23.828885 30059 kubeadm.go:322] - 'docker logs CONTAINERID'
I0224 14:55:23.828900 30059 kubeadm.go:322]
I0224 14:55:23.831155 30059 kubeadm.go:322] W0224 22:53:25.880120 1157 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0224 14:55:23.831320 30059 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0224 14:55:23.831400 30059 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0224 14:55:23.831506 30059 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0224 14:55:23.831588 30059 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0224 14:55:23.831700 30059 kubeadm.go:322] W0224 22:53:28.810292 1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0224 14:55:23.831803 30059 kubeadm.go:322] W0224 22:53:28.811314 1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0224 14:55:23.831877 30059 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0224 14:55:23.831954 30059 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0224 14:55:23.832152 30059 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 22:53:25.880120 1157 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 22:53:28.810292 1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 22:53:28.811314 1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0224 14:55:23.832189 30059 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0224 14:55:24.242180 30059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0224 14:55:24.251976 30059 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0224 14:55:24.252030 30059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0224 14:55:24.259547 30059 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0224 14:55:24.259570 30059 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0224 14:55:24.307087 30059 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0224 14:55:24.307145 30059 kubeadm.go:322] [preflight] Running pre-flight checks
I0224 14:55:24.472699 30059 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0224 14:55:24.472796 30059 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0224 14:55:24.472891 30059 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0224 14:55:24.627183 30059 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0224 14:55:24.627591 30059 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0224 14:55:24.627639 30059 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0224 14:55:24.699699 30059 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0224 14:55:24.721311 30059 out.go:204] - Generating certificates and keys ...
I0224 14:55:24.721415 30059 kubeadm.go:322] [certs] Using existing ca certificate authority
I0224 14:55:24.721474 30059 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0224 14:55:24.721543 30059 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0224 14:55:24.721600 30059 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0224 14:55:24.721657 30059 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0224 14:55:24.721706 30059 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0224 14:55:24.721825 30059 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0224 14:55:24.721955 30059 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0224 14:55:24.722050 30059 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0224 14:55:24.722131 30059 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0224 14:55:24.722220 30059 kubeadm.go:322] [certs] Using the existing "sa" key
I0224 14:55:24.722313 30059 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0224 14:55:24.803926 30059 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0224 14:55:24.928124 30059 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0224 14:55:25.043520 30059 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0224 14:55:25.193611 30059 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0224 14:55:25.194064 30059 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0224 14:55:25.215696 30059 out.go:204] - Booting up control plane ...
I0224 14:55:25.215944 30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0224 14:55:25.216099 30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0224 14:55:25.216207 30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0224 14:55:25.216343 30059 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0224 14:55:25.216596 30059 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0224 14:56:05.203591 30059 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0224 14:56:05.204505 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:56:05.204751 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:56:10.205510 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:56:10.205741 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:56:20.207797 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:56:20.208048 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:56:40.209838 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:56:40.210072 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:57:20.212217 30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0224 14:57:20.212451 30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0224 14:57:20.212469 30059 kubeadm.go:322]
I0224 14:57:20.212537 30059 kubeadm.go:322] Unfortunately, an error has occurred:
I0224 14:57:20.212582 30059 kubeadm.go:322] timed out waiting for the condition
I0224 14:57:20.212588 30059 kubeadm.go:322]
I0224 14:57:20.212622 30059 kubeadm.go:322] This error is likely caused by:
I0224 14:57:20.212671 30059 kubeadm.go:322] - The kubelet is not running
I0224 14:57:20.212800 30059 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0224 14:57:20.212810 30059 kubeadm.go:322]
I0224 14:57:20.212974 30059 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0224 14:57:20.213026 30059 kubeadm.go:322] - 'systemctl status kubelet'
I0224 14:57:20.213074 30059 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0224 14:57:20.213086 30059 kubeadm.go:322]
I0224 14:57:20.213199 30059 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0224 14:57:20.213311 30059 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0224 14:57:20.213328 30059 kubeadm.go:322]
I0224 14:57:20.213441 30059 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0224 14:57:20.213499 30059 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0224 14:57:20.213587 30059 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0224 14:57:20.213611 30059 kubeadm.go:322] - 'docker logs CONTAINERID'
I0224 14:57:20.213616 30059 kubeadm.go:322]
I0224 14:57:20.216377 30059 kubeadm.go:322] W0224 22:55:24.305833 3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0224 14:57:20.216521 30059 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0224 14:57:20.216579 30059 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0224 14:57:20.216676 30059 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0224 14:57:20.216753 30059 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0224 14:57:20.216858 30059 kubeadm.go:322] W0224 22:55:25.198301 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0224 14:57:20.216948 30059 kubeadm.go:322] W0224 22:55:25.198995 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0224 14:57:20.217015 30059 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0224 14:57:20.217072 30059 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
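The repeated [kubelet-check] failures above all come from the same probe: kubeadm polls the kubelet's healthz endpoint on localhost:10248 and keeps getting connection refused, which means the kubelet process never started listening. A minimal, self-contained sketch of that probe (not part of minikube or kubeadm; the 5-second timeout is an illustrative choice):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same check kubeadm's [kubelet-check] performs: GET the kubelet healthz endpoint.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// On this node the call fails with "connect: connection refused",
		// i.e. nothing is listening on 10248 because the kubelet never came up.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}

Run on the node itself (for example via minikube ssh), a healthy kubelet returns a 200 response from this endpoint.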
I0224 14:57:20.217108 30059 kubeadm.go:403] StartCluster complete in 3m54.422743929s
I0224 14:57:20.217201 30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0224 14:57:20.235671 30059 logs.go:277] 0 containers: []
W0224 14:57:20.235683 30059 logs.go:279] No container was found matching "kube-apiserver"
I0224 14:57:20.235750 30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0224 14:57:20.255341 30059 logs.go:277] 0 containers: []
W0224 14:57:20.255354 30059 logs.go:279] No container was found matching "etcd"
I0224 14:57:20.255423 30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0224 14:57:20.274592 30059 logs.go:277] 0 containers: []
W0224 14:57:20.274604 30059 logs.go:279] No container was found matching "coredns"
I0224 14:57:20.274672 30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0224 14:57:20.293808 30059 logs.go:277] 0 containers: []
W0224 14:57:20.293821 30059 logs.go:279] No container was found matching "kube-scheduler"
I0224 14:57:20.293900 30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0224 14:57:20.312157 30059 logs.go:277] 0 containers: []
W0224 14:57:20.312174 30059 logs.go:279] No container was found matching "kube-proxy"
I0224 14:57:20.312240 30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0224 14:57:20.332510 30059 logs.go:277] 0 containers: []
W0224 14:57:20.332523 30059 logs.go:279] No container was found matching "kube-controller-manager"
I0224 14:57:20.332599 30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0224 14:57:20.351554 30059 logs.go:277] 0 containers: []
W0224 14:57:20.351568 30059 logs.go:279] No container was found matching "kindnet"
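Each of the lookups above is minikube's logs.go asking the node's container runtime for containers whose names match k8s_<component>; zero hits for every control-plane component confirms nothing was ever started. A rough local equivalent of one lookup (run directly against the Docker CLI rather than through ssh_runner; the component name is just an example):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (running or exited)
// whose name matches k8s_<component>, mirroring the filter used above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids) // 0 containers here, matching the log
}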
I0224 14:57:20.351575 30059 logs.go:123] Gathering logs for container status ...
I0224 14:57:20.351583 30059 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0224 14:57:22.400248 30059 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048615565s)
I0224 14:57:22.400410 30059 logs.go:123] Gathering logs for kubelet ...
I0224 14:57:22.400422 30059 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0224 14:57:22.440089 30059 logs.go:123] Gathering logs for dmesg ...
I0224 14:57:22.440103 30059 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0224 14:57:22.454798 30059 logs.go:123] Gathering logs for describe nodes ...
I0224 14:57:22.454812 30059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0224 14:57:22.509396 30059 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0224 14:57:22.509407 30059 logs.go:123] Gathering logs for Docker ...
I0224 14:57:22.509414 30059 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
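With no containers to inspect, minikube falls back to host-level logs: crictl/docker ps, journalctl for the kubelet and Docker units, dmesg, and a kubectl describe nodes that fails because the API server on localhost:8443 is not up. A sketch of gathering the two most useful of those logs locally (file names and line counts are illustrative; the commands generally need root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// gather runs a command and writes its combined output to <name>.log,
// roughly what minikube's logs.go does over ssh_runner above.
func gather(name string, args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s: %v\n", name, err)
	}
	_ = os.WriteFile(name+".log", out, 0o644)
}

func main() {
	gather("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	gather("docker", "journalctl", "-u", "docker", "-n", "400")
}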
W0224 14:57:22.534365 30059 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 22:55:24.305833 3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 22:55:25.198301 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 22:55:25.198995 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0224 14:57:22.534386 30059 out.go:239] *
W0224 14:57:22.534506 30059 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 22:55:24.305833 3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 22:55:25.198301 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 22:55:25.198995 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0224 14:57:22.534519 30059 out.go:239] *
W0224 14:57:22.535130 30059 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0224 14:57:22.621779 30059 out.go:177]
W0224 14:57:22.664182 30059 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0224 22:55:24.305833 3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0224 22:55:25.198301 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0224 22:55:25.198995 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0224 14:57:22.664300 30059 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0224 14:57:22.664367 30059 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0224 14:57:22.685838 30059 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-721000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (264.64s)
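Minikube's own suggestion above is to check 'journalctl -xeu kubelet' and retry with the kubelet cgroup driver forced to systemd (the cgroupfs/systemd mismatch flagged in the preflight warnings is a common cause of K8S_KUBELET_NOT_RUNNING). A sketch of re-running the failing start with that override, using the binary, profile, and flags from the log above; the extra-config flag is minikube's suggested workaround, not something the test currently passes:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Same start invocation as the failing test, plus the suggested cgroup-driver override.
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "ingress-addon-legacy-721000",
		"--kubernetes-version=v1.18.20",
		"--memory=4096",
		"--wait=true",
		"--driver=docker",
		"--extra-config=kubelet.cgroup-driver=systemd")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}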