=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-611000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0223 12:43:53.665808 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:46:09.887354 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:46:37.574389 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:46:46.556713 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.562801 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.573843 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.596021 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.637779 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.717862 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.878141 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:47.199518 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:47.840276 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:49.121819 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:51.683583 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:56.804484 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:47:07.046983 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:47:27.529570 2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-611000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m21.830816477s)
-- stdout --
* [ingress-addon-legacy-611000] minikube v1.29.0 on Darwin 13.2
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-611000 in cluster ingress-addon-legacy-611000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0223 12:43:42.522214 5086 out.go:296] Setting OutFile to fd 1 ...
I0223 12:43:42.522368 5086 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 12:43:42.522374 5086 out.go:309] Setting ErrFile to fd 2...
I0223 12:43:42.522378 5086 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 12:43:42.522481 5086 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
I0223 12:43:42.523813 5086 out.go:303] Setting JSON to false
I0223 12:43:42.542360 5086 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":797,"bootTime":1677184225,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0223 12:43:42.542449 5086 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0223 12:43:42.564178 5086 out.go:177] * [ingress-addon-legacy-611000] minikube v1.29.0 on Darwin 13.2
I0223 12:43:42.606146 5086 out.go:177] - MINIKUBE_LOCATION=15909
I0223 12:43:42.606146 5086 notify.go:220] Checking for updates...
I0223 12:43:42.628268 5086 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
I0223 12:43:42.650099 5086 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0223 12:43:42.671093 5086 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 12:43:42.692284 5086 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
I0223 12:43:42.714100 5086 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 12:43:42.735262 5086 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 12:43:42.795432 5086 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0223 12:43:42.795548 5086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0223 12:43:42.934839 5086 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 20:43:42.844343792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0223 12:43:42.956666 5086 out.go:177] * Using the docker driver based on user configuration
I0223 12:43:42.978422 5086 start.go:296] selected driver: docker
I0223 12:43:42.978449 5086 start.go:857] validating driver "docker" against <nil>
I0223 12:43:42.978472 5086 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 12:43:42.982419 5086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0223 12:43:43.123412 5086 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 20:43:43.032036508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0223 12:43:43.123524 5086 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0223 12:43:43.123717 5086 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0223 12:43:43.145335 5086 out.go:177] * Using Docker Desktop driver with root privileges
I0223 12:43:43.166927 5086 cni.go:84] Creating CNI manager for ""
I0223 12:43:43.166958 5086 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0223 12:43:43.166969 5086 start_flags.go:319] config:
{Name:ingress-addon-legacy-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 12:43:43.188113 5086 out.go:177] * Starting control plane node ingress-addon-legacy-611000 in cluster ingress-addon-legacy-611000
I0223 12:43:43.231210 5086 cache.go:120] Beginning downloading kic base image for docker with docker
I0223 12:43:43.252978 5086 out.go:177] * Pulling base image ...
I0223 12:43:43.295154 5086 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 12:43:43.295215 5086 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
I0223 12:43:43.352795 5086 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
I0223 12:43:43.352820 5086 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
I0223 12:43:43.399226 5086 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0223 12:43:43.399273 5086 cache.go:57] Caching tarball of preloaded images
I0223 12:43:43.399611 5086 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 12:43:43.421259 5086 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0223 12:43:43.463042 5086 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 12:43:43.688982 5086 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0223 12:43:54.465288 5086 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 12:43:54.465448 5086 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 12:43:55.087526 5086 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0223 12:43:55.087757 5086 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/config.json ...
I0223 12:43:55.087785 5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/config.json: {Name:mk1e549380ea62e21517a4018d2dfab72fa04b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 12:43:55.088060 5086 cache.go:193] Successfully downloaded all kic artifacts
I0223 12:43:55.088088 5086 start.go:364] acquiring machines lock for ingress-addon-legacy-611000: {Name:mk9aab0310f9468d3dad74767a4969e82ab28a47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 12:43:55.088178 5086 start.go:368] acquired machines lock for "ingress-addon-legacy-611000" in 83.118µs
I0223 12:43:55.088205 5086 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 12:43:55.088251 5086 start.go:125] createHost starting for "" (driver="docker")
I0223 12:43:55.150667 5086 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0223 12:43:55.151015 5086 start.go:159] libmachine.API.Create for "ingress-addon-legacy-611000" (driver="docker")
I0223 12:43:55.151059 5086 client.go:168] LocalClient.Create starting
I0223 12:43:55.151258 5086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
I0223 12:43:55.151340 5086 main.go:141] libmachine: Decoding PEM data...
I0223 12:43:55.151373 5086 main.go:141] libmachine: Parsing certificate...
I0223 12:43:55.151482 5086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
I0223 12:43:55.151545 5086 main.go:141] libmachine: Decoding PEM data...
I0223 12:43:55.151562 5086 main.go:141] libmachine: Parsing certificate...
I0223 12:43:55.152415 5086 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-611000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0223 12:43:55.207714 5086 cli_runner.go:211] docker network inspect ingress-addon-legacy-611000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0223 12:43:55.207824 5086 network_create.go:281] running [docker network inspect ingress-addon-legacy-611000] to gather additional debugging logs...
I0223 12:43:55.207841 5086 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-611000
W0223 12:43:55.261808 5086 cli_runner.go:211] docker network inspect ingress-addon-legacy-611000 returned with exit code 1
I0223 12:43:55.261834 5086 network_create.go:284] error running [docker network inspect ingress-addon-legacy-611000]: docker network inspect ingress-addon-legacy-611000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-611000
I0223 12:43:55.261852 5086 network_create.go:286] output of [docker network inspect ingress-addon-legacy-611000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-611000
** /stderr **
I0223 12:43:55.261948 5086 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0223 12:43:55.317120 5086 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005483b0}
I0223 12:43:55.317153 5086 network_create.go:123] attempt to create docker network ingress-addon-legacy-611000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0223 12:43:55.317218 5086 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-611000 ingress-addon-legacy-611000
I0223 12:43:55.402614 5086 network_create.go:107] docker network ingress-addon-legacy-611000 192.168.49.0/24 created
I0223 12:43:55.402663 5086 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-611000" container
I0223 12:43:55.402793 5086 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0223 12:43:55.458104 5086 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-611000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-611000 --label created_by.minikube.sigs.k8s.io=true
I0223 12:43:55.511713 5086 oci.go:103] Successfully created a docker volume ingress-addon-legacy-611000
I0223 12:43:55.511850 5086 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-611000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-611000 --entrypoint /usr/bin/test -v ingress-addon-legacy-611000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
I0223 12:43:55.954535 5086 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-611000
I0223 12:43:55.954594 5086 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 12:43:55.954609 5086 kic.go:190] Starting extracting preloaded images to volume ...
I0223 12:43:55.954736 5086 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-611000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
I0223 12:44:02.145695 5086 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-611000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.190792613s)
I0223 12:44:02.145729 5086 kic.go:199] duration metric: took 6.191039 seconds to extract preloaded images to volume
I0223 12:44:02.145844 5086 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0223 12:44:02.287033 5086 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-611000 --name ingress-addon-legacy-611000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-611000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-611000 --network ingress-addon-legacy-611000 --ip 192.168.49.2 --volume ingress-addon-legacy-611000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
I0223 12:44:02.631547 5086 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Running}}
I0223 12:44:02.689138 5086 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Status}}
I0223 12:44:02.748529 5086 cli_runner.go:164] Run: docker exec ingress-addon-legacy-611000 stat /var/lib/dpkg/alternatives/iptables
I0223 12:44:02.867541 5086 oci.go:144] the created container "ingress-addon-legacy-611000" has a running status.
I0223 12:44:02.867575 5086 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa...
I0223 12:44:03.008251 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0223 12:44:03.008320 5086 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0223 12:44:03.108984 5086 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Status}}
I0223 12:44:03.164534 5086 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0223 12:44:03.164554 5086 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-611000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0223 12:44:03.265498 5086 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Status}}
I0223 12:44:03.320907 5086 machine.go:88] provisioning docker machine ...
I0223 12:44:03.320948 5086 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-611000"
I0223 12:44:03.321062 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:03.377375 5086 main.go:141] libmachine: Using SSH client type: native
I0223 12:44:03.377761 5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50516 <nil> <nil>}
I0223 12:44:03.377775 5086 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-611000 && echo "ingress-addon-legacy-611000" | sudo tee /etc/hostname
I0223 12:44:03.519610 5086 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-611000
I0223 12:44:03.519681 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:03.576972 5086 main.go:141] libmachine: Using SSH client type: native
I0223 12:44:03.577316 5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50516 <nil> <nil>}
I0223 12:44:03.577336 5086 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-611000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-611000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-611000' | sudo tee -a /etc/hosts;
fi
fi
I0223 12:44:03.711856 5086 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 12:44:03.711882 5086 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-825/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-825/.minikube}
I0223 12:44:03.711905 5086 ubuntu.go:177] setting up certificates
I0223 12:44:03.711912 5086 provision.go:83] configureAuth start
I0223 12:44:03.711997 5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-611000
I0223 12:44:03.767981 5086 provision.go:138] copyHostCerts
I0223 12:44:03.768027 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
I0223 12:44:03.768088 5086 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem, removing ...
I0223 12:44:03.768095 5086 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
I0223 12:44:03.768218 5086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem (1078 bytes)
I0223 12:44:03.768382 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
I0223 12:44:03.768420 5086 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem, removing ...
I0223 12:44:03.768425 5086 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
I0223 12:44:03.768499 5086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem (1123 bytes)
I0223 12:44:03.768611 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
I0223 12:44:03.768650 5086 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem, removing ...
I0223 12:44:03.768655 5086 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
I0223 12:44:03.768720 5086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem (1675 bytes)
I0223 12:44:03.768831 5086 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-611000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-611000]
I0223 12:44:03.859369 5086 provision.go:172] copyRemoteCerts
I0223 12:44:03.859423 5086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 12:44:03.859470 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:03.915266 5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
I0223 12:44:04.009994 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0223 12:44:04.010087 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0223 12:44:04.026966 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem -> /etc/docker/server.pem
I0223 12:44:04.027059 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0223 12:44:04.043801 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0223 12:44:04.043879 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0223 12:44:04.060474 5086 provision.go:86] duration metric: configureAuth took 348.538018ms
I0223 12:44:04.060493 5086 ubuntu.go:193] setting minikube options for container-runtime
I0223 12:44:04.060655 5086 config.go:182] Loaded profile config "ingress-addon-legacy-611000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0223 12:44:04.060718 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:04.117648 5086 main.go:141] libmachine: Using SSH client type: native
I0223 12:44:04.118007 5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50516 <nil> <nil>}
I0223 12:44:04.118024 5086 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 12:44:04.251967 5086 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0223 12:44:04.251986 5086 ubuntu.go:71] root file system type: overlay
I0223 12:44:04.252132 5086 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 12:44:04.252227 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:04.308898 5086 main.go:141] libmachine: Using SSH client type: native
I0223 12:44:04.309276 5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50516 <nil> <nil>}
I0223 12:44:04.309325 5086 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 12:44:04.451376 5086 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 12:44:04.451492 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:04.508200 5086 main.go:141] libmachine: Using SSH client type: native
I0223 12:44:04.508568 5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50516 <nil> <nil>}
I0223 12:44:04.508581 5086 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 12:44:05.121534 5086 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-02-23 20:44:04.449655934 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0223 12:44:05.121560 5086 machine.go:91] provisioned docker machine in 1.800608719s
I0223 12:44:05.121566 5086 client.go:171] LocalClient.Create took 9.97037326s
I0223 12:44:05.121592 5086 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-611000" took 9.970451342s
I0223 12:44:05.121602 5086 start.go:300] post-start starting for "ingress-addon-legacy-611000" (driver="docker")
I0223 12:44:05.121607 5086 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 12:44:05.121689 5086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 12:44:05.121742 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:05.182175 5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
I0223 12:44:05.276293 5086 ssh_runner.go:195] Run: cat /etc/os-release
I0223 12:44:05.279915 5086 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0223 12:44:05.279934 5086 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0223 12:44:05.279946 5086 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0223 12:44:05.279951 5086 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0223 12:44:05.279962 5086 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/addons for local assets ...
I0223 12:44:05.280063 5086 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/files for local assets ...
I0223 12:44:05.280242 5086 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> 20572.pem in /etc/ssl/certs
I0223 12:44:05.280249 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /etc/ssl/certs/20572.pem
I0223 12:44:05.280458 5086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 12:44:05.287454 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /etc/ssl/certs/20572.pem (1708 bytes)
I0223 12:44:05.304375 5086 start.go:303] post-start completed in 182.761912ms
I0223 12:44:05.304884 5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-611000
I0223 12:44:05.361924 5086 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/config.json ...
I0223 12:44:05.362347 5086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0223 12:44:05.362413 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:05.418158 5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
I0223 12:44:05.508480 5086 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0223 12:44:05.513207 5086 start.go:128] duration metric: createHost completed in 10.424816003s
I0223 12:44:05.513223 5086 start.go:83] releasing machines lock for "ingress-addon-legacy-611000", held for 10.424903748s
I0223 12:44:05.513306 5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-611000
I0223 12:44:05.569508 5086 ssh_runner.go:195] Run: cat /version.json
I0223 12:44:05.569542 5086 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0223 12:44:05.569583 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:05.569610 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:05.628604 5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
I0223 12:44:05.628722 5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
I0223 12:44:05.969791 5086 ssh_runner.go:195] Run: systemctl --version
I0223 12:44:05.974264 5086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0223 12:44:05.979084 5086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0223 12:44:05.999541 5086 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0223 12:44:05.999624 5086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0223 12:44:06.014204 5086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0223 12:44:06.021974 5086 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0223 12:44:06.021989 5086 start.go:485] detecting cgroup driver to use...
I0223 12:44:06.022001 5086 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 12:44:06.022082 5086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 12:44:06.035214 5086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0223 12:44:06.043649 5086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 12:44:06.051881 5086 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 12:44:06.051938 5086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 12:44:06.060500 5086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 12:44:06.068758 5086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 12:44:06.077080 5086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 12:44:06.085403 5086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 12:44:06.093047 5086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0223 12:44:06.101304 5086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 12:44:06.108302 5086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 12:44:06.115251 5086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 12:44:06.180122 5086 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 12:44:06.252107 5086 start.go:485] detecting cgroup driver to use...
I0223 12:44:06.252127 5086 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 12:44:06.252200 5086 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 12:44:06.262397 5086 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0223 12:44:06.262477 5086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 12:44:06.272560 5086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0223 12:44:06.286198 5086 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 12:44:06.399038 5086 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 12:44:06.479151 5086 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 12:44:06.479169 5086 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0223 12:44:06.492009 5086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 12:44:06.587108 5086 ssh_runner.go:195] Run: sudo systemctl restart docker
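Docker itself is switched to the "cgroupfs" driver here by writing a small /etc/docker/daemon.json (144 bytes, copied over SSH above) and restarting the daemon. The file contents are not shown in this log; a hedged way to confirm the result on the node would be:
  # verify the cgroup driver docker actually ended up with (daemon.json contents are an assumption, not logged here)
  minikube ssh -p ingress-addon-legacy-611000 -- 'cat /etc/docker/daemon.json; docker info --format {{.CgroupDriver}}'
  # expected: an "exec-opts": ["native.cgroupdriver=cgroupfs"] entry and "cgroupfs" reported by docker info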
I0223 12:44:06.795296 5086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 12:44:06.819580 5086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 12:44:06.886115 5086 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
I0223 12:44:06.886342 5086 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-611000 dig +short host.docker.internal
I0223 12:44:07.017178 5086 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0223 12:44:07.017290 5086 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0223 12:44:07.021690 5086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 12:44:07.031420 5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
I0223 12:44:07.087420 5086 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 12:44:07.087511 5086 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 12:44:07.106870 5086 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0223 12:44:07.106888 5086 docker.go:560] Images already preloaded, skipping extraction
I0223 12:44:07.106982 5086 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 12:44:07.126951 5086 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0223 12:44:07.126967 5086 cache_images.go:84] Images are preloaded, skipping loading
I0223 12:44:07.127051 5086 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0223 12:44:07.152785 5086 cni.go:84] Creating CNI manager for ""
I0223 12:44:07.152804 5086 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0223 12:44:07.152817 5086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0223 12:44:07.152833 5086 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-611000 NodeName:ingress-addon-legacy-611000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0223 12:44:07.152946 5086 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-611000"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0223 12:44:07.153032 5086 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-611000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
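The unit drop-in and the rendered kubeadm config shown above are written to the node just below (10-kubeadm.conf, kubelet.service and kubeadm.yaml.new). A minimal sketch for inspecting or sanity-checking them by hand on this profile, assuming the v1.18.20 binaries staged under /var/lib/minikube/binaries as in this run:
  minikube ssh -p ingress-addon-legacy-611000 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  minikube ssh -p ingress-addon-legacy-611000 -- sudo cat /var/tmp/minikube/kubeadm.yaml
  # dry-run the same config without mutating node state (illustrative; not part of the test flow)
  minikube ssh -p ingress-addon-legacy-611000 -- sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run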
I0223 12:44:07.153102 5086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0223 12:44:07.160746 5086 binaries.go:44] Found k8s binaries, skipping transfer
I0223 12:44:07.160814 5086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0223 12:44:07.168107 5086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0223 12:44:07.180505 5086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0223 12:44:07.193037 5086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0223 12:44:07.205539 5086 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0223 12:44:07.209541 5086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 12:44:07.219043 5086 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000 for IP: 192.168.49.2
I0223 12:44:07.219061 5086 certs.go:186] acquiring lock for shared ca certs: {Name:mk9b7a98958f4333f06cfa6d87963d4d7f2b94cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 12:44:07.219243 5086 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key
I0223 12:44:07.219306 5086 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key
I0223 12:44:07.219356 5086 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.key
I0223 12:44:07.219369 5086 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.crt with IP's: []
I0223 12:44:07.337418 5086 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.crt ...
I0223 12:44:07.337431 5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.crt: {Name:mk129ec7f5a94c39da390d3fda302771208386c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 12:44:07.337759 5086 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.key ...
I0223 12:44:07.337775 5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.key: {Name:mk57809e5122d9b38d6f444bd5a8f30310a55151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 12:44:07.338002 5086 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key.dd3b5fb2
I0223 12:44:07.338018 5086 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0223 12:44:07.440031 5086 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt.dd3b5fb2 ...
I0223 12:44:07.440040 5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt.dd3b5fb2: {Name:mkd9472f270de56d53ed5155231c608aab76cb5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 12:44:07.440273 5086 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key.dd3b5fb2 ...
I0223 12:44:07.440281 5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key.dd3b5fb2: {Name:mk633821d5c92ba97ceefebba282c13eb5e823a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 12:44:07.440476 5086 certs.go:333] copying /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt
I0223 12:44:07.440646 5086 certs.go:337] copying /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key
I0223 12:44:07.440815 5086 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key
I0223 12:44:07.440833 5086 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt with IP's: []
I0223 12:44:07.738249 5086 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt ...
I0223 12:44:07.738263 5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt: {Name:mk03bb3785ff7aa6ffb4e0b3c55bf5bd5a5b9025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 12:44:07.738575 5086 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key ...
I0223 12:44:07.738583 5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key: {Name:mkfed3d8981514ca42c94d5ecf0c8cbc980b582b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 12:44:07.738799 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0223 12:44:07.738835 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0223 12:44:07.738858 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0223 12:44:07.738880 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0223 12:44:07.738900 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0223 12:44:07.738922 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0223 12:44:07.738942 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0223 12:44:07.738965 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0223 12:44:07.739062 5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem (1338 bytes)
W0223 12:44:07.739116 5086 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057_empty.pem, impossibly tiny 0 bytes
I0223 12:44:07.739129 5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem (1679 bytes)
I0223 12:44:07.739162 5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem (1078 bytes)
I0223 12:44:07.739193 5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem (1123 bytes)
I0223 12:44:07.739226 5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem (1675 bytes)
I0223 12:44:07.739302 5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem (1708 bytes)
I0223 12:44:07.739336 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /usr/share/ca-certificates/20572.pem
I0223 12:44:07.739365 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0223 12:44:07.739385 5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem -> /usr/share/ca-certificates/2057.pem
I0223 12:44:07.739917 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0223 12:44:07.757904 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0223 12:44:07.774576 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0223 12:44:07.791516 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0223 12:44:07.808806 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0223 12:44:07.825544 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0223 12:44:07.842556 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0223 12:44:07.859490 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0223 12:44:07.876434 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /usr/share/ca-certificates/20572.pem (1708 bytes)
I0223 12:44:07.893259 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0223 12:44:07.910033 5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem --> /usr/share/ca-certificates/2057.pem (1338 bytes)
I0223 12:44:07.926923 5086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0223 12:44:07.939439 5086 ssh_runner.go:195] Run: openssl version
I0223 12:44:07.944783 5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0223 12:44:07.952788 5086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0223 12:44:07.956608 5086 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
I0223 12:44:07.956656 5086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0223 12:44:07.962064 5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0223 12:44:07.970142 5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2057.pem && ln -fs /usr/share/ca-certificates/2057.pem /etc/ssl/certs/2057.pem"
I0223 12:44:07.978267 5086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2057.pem
I0223 12:44:07.982421 5086 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
I0223 12:44:07.982465 5086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2057.pem
I0223 12:44:07.987941 5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2057.pem /etc/ssl/certs/51391683.0"
I0223 12:44:07.995678 5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20572.pem && ln -fs /usr/share/ca-certificates/20572.pem /etc/ssl/certs/20572.pem"
I0223 12:44:08.003495 5086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20572.pem
I0223 12:44:08.007336 5086 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
I0223 12:44:08.007382 5086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20572.pem
I0223 12:44:08.012766 5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20572.pem /etc/ssl/certs/3ec20f2e.0"
I0223 12:44:08.020626 5086 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 12:44:08.020742 5086 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 12:44:08.040224 5086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0223 12:44:08.047941 5086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0223 12:44:08.055233 5086 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0223 12:44:08.055288 5086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 12:44:08.062521 5086 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 12:44:08.062546 5086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0223 12:44:08.109644 5086 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0223 12:44:08.109695 5086 kubeadm.go:322] [preflight] Running pre-flight checks
I0223 12:44:08.271425 5086 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0223 12:44:08.271524 5086 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0223 12:44:08.271606 5086 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 12:44:08.418756 5086 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 12:44:08.419275 5086 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 12:44:08.419324 5086 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0223 12:44:08.490033 5086 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 12:44:08.511717 5086 out.go:204] - Generating certificates and keys ...
I0223 12:44:08.511843 5086 kubeadm.go:322] [certs] Using existing ca certificate authority
I0223 12:44:08.511918 5086 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0223 12:44:08.630914 5086 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0223 12:44:08.843859 5086 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0223 12:44:09.045387 5086 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0223 12:44:09.160572 5086 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0223 12:44:09.271313 5086 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0223 12:44:09.271438 5086 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0223 12:44:09.473943 5086 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0223 12:44:09.474253 5086 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0223 12:44:09.630096 5086 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0223 12:44:09.702706 5086 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0223 12:44:09.764808 5086 kubeadm.go:322] [certs] Generating "sa" key and public key
I0223 12:44:09.764871 5086 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 12:44:09.920126 5086 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 12:44:10.066583 5086 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 12:44:10.161021 5086 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 12:44:10.359555 5086 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 12:44:10.360038 5086 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 12:44:10.381626 5086 out.go:204] - Booting up control plane ...
I0223 12:44:10.381846 5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 12:44:10.382033 5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 12:44:10.382152 5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 12:44:10.382276 5086 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 12:44:10.382568 5086 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0223 12:44:50.368675 5086 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0223 12:44:50.369153 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:44:50.369317 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:44:55.371225 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:44:55.371564 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:45:05.372086 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:45:05.372257 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:45:25.374406 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:45:25.374659 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:46:05.439875 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:46:05.440060 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:46:05.440076 5086 kubeadm.go:322]
I0223 12:46:05.440112 5086 kubeadm.go:322] Unfortunately, an error has occurred:
I0223 12:46:05.440152 5086 kubeadm.go:322] timed out waiting for the condition
I0223 12:46:05.440162 5086 kubeadm.go:322]
I0223 12:46:05.440190 5086 kubeadm.go:322] This error is likely caused by:
I0223 12:46:05.440231 5086 kubeadm.go:322] - The kubelet is not running
I0223 12:46:05.440311 5086 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0223 12:46:05.440317 5086 kubeadm.go:322]
I0223 12:46:05.440403 5086 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0223 12:46:05.440444 5086 kubeadm.go:322] - 'systemctl status kubelet'
I0223 12:46:05.440468 5086 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0223 12:46:05.440475 5086 kubeadm.go:322]
I0223 12:46:05.440556 5086 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0223 12:46:05.440616 5086 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0223 12:46:05.440622 5086 kubeadm.go:322]
I0223 12:46:05.440696 5086 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0223 12:46:05.440760 5086 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0223 12:46:05.440836 5086 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0223 12:46:05.440863 5086 kubeadm.go:322] - 'docker logs CONTAINERID'
I0223 12:46:05.440873 5086 kubeadm.go:322]
I0223 12:46:05.443289 5086 kubeadm.go:322] W0223 20:44:08.108759 1158 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0223 12:46:05.443443 5086 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0223 12:46:05.443512 5086 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0223 12:46:05.443627 5086 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0223 12:46:05.443725 5086 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0223 12:46:05.443828 5086 kubeadm.go:322] W0223 20:44:10.363292 1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 12:46:05.443929 5086 kubeadm.go:322] W0223 20:44:10.363983 1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 12:46:05.443990 5086 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0223 12:46:05.444054 5086 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0223 12:46:05.444253 5086 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 20:44:08.108759 1158 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 20:44:10.363292 1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 20:44:10.363983 1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 20:44:08.108759 1158 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 20:44:10.363292 1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 20:44:10.363983 1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
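The first kubeadm init attempt above times out in wait-control-plane because the kubelet never answers on 127.0.0.1:10248; minikube resets and retries immediately below. The kubeadm output already names the manual checks; collected as one hedged sequence against this profile they would look like:
  minikube ssh -p ingress-addon-legacy-611000 -- sudo systemctl status kubelet
  minikube ssh -p ingress-addon-legacy-611000 -- sudo journalctl -xeu kubelet
  minikube ssh -p ingress-addon-legacy-611000 -- 'docker ps -a | grep kube | grep -v pause'
  # then, for any failing container found by the previous command:
  minikube ssh -p ingress-addon-legacy-611000 -- docker logs CONTAINERID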
I0223 12:46:05.444289 5086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0223 12:46:05.854826 5086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 12:46:05.866325 5086 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0223 12:46:05.866385 5086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 12:46:05.873829 5086 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 12:46:05.873852 5086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0223 12:46:05.920870 5086 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0223 12:46:05.920917 5086 kubeadm.go:322] [preflight] Running pre-flight checks
I0223 12:46:06.082677 5086 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0223 12:46:06.082777 5086 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0223 12:46:06.082859 5086 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 12:46:06.235168 5086 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 12:46:06.235638 5086 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 12:46:06.235672 5086 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0223 12:46:06.312527 5086 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 12:46:06.333704 5086 out.go:204] - Generating certificates and keys ...
I0223 12:46:06.333803 5086 kubeadm.go:322] [certs] Using existing ca certificate authority
I0223 12:46:06.333874 5086 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0223 12:46:06.333949 5086 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0223 12:46:06.333997 5086 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0223 12:46:06.334048 5086 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0223 12:46:06.334131 5086 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0223 12:46:06.334202 5086 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0223 12:46:06.334257 5086 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0223 12:46:06.334311 5086 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0223 12:46:06.334408 5086 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0223 12:46:06.334444 5086 kubeadm.go:322] [certs] Using the existing "sa" key
I0223 12:46:06.334496 5086 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 12:46:06.398000 5086 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 12:46:06.496647 5086 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 12:46:06.676754 5086 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 12:46:06.831371 5086 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 12:46:06.831832 5086 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 12:46:06.853199 5086 out.go:204] - Booting up control plane ...
I0223 12:46:06.853333 5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 12:46:06.853488 5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 12:46:06.853599 5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 12:46:06.853701 5086 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 12:46:06.853966 5086 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0223 12:46:46.842224 5086 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0223 12:46:46.842905 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:46:46.843206 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:46:51.844556 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:46:51.844817 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:47:01.846538 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:47:01.846770 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:47:21.848239 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:47:21.848446 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:48:01.850694 5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 12:48:01.850926 5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 12:48:01.850937 5086 kubeadm.go:322]
I0223 12:48:01.851008 5086 kubeadm.go:322] Unfortunately, an error has occurred:
I0223 12:48:01.851061 5086 kubeadm.go:322] timed out waiting for the condition
I0223 12:48:01.851070 5086 kubeadm.go:322]
I0223 12:48:01.851115 5086 kubeadm.go:322] This error is likely caused by:
I0223 12:48:01.851202 5086 kubeadm.go:322] - The kubelet is not running
I0223 12:48:01.851415 5086 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0223 12:48:01.851426 5086 kubeadm.go:322]
I0223 12:48:01.851522 5086 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0223 12:48:01.851565 5086 kubeadm.go:322] - 'systemctl status kubelet'
I0223 12:48:01.851602 5086 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0223 12:48:01.851608 5086 kubeadm.go:322]
I0223 12:48:01.851690 5086 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0223 12:48:01.851762 5086 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0223 12:48:01.851768 5086 kubeadm.go:322]
I0223 12:48:01.851847 5086 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0223 12:48:01.851891 5086 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0223 12:48:01.851957 5086 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0223 12:48:01.851989 5086 kubeadm.go:322] - 'docker logs CONTAINERID'
I0223 12:48:01.852001 5086 kubeadm.go:322]
I0223 12:48:01.854422 5086 kubeadm.go:322] W0223 20:46:05.920044 3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0223 12:48:01.854589 5086 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0223 12:48:01.854656 5086 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0223 12:48:01.854761 5086 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0223 12:48:01.854854 5086 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0223 12:48:01.854945 5086 kubeadm.go:322] W0223 20:46:06.836308 3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 12:48:01.855043 5086 kubeadm.go:322] W0223 20:46:06.837138 3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 12:48:01.855120 5086 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0223 12:48:01.855189 5086 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0223 12:48:01.855216 5086 kubeadm.go:403] StartCluster complete in 3m53.767030501s
I0223 12:48:01.855307 5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0223 12:48:01.875352 5086 logs.go:277] 0 containers: []
W0223 12:48:01.875367 5086 logs.go:279] No container was found matching "kube-apiserver"
I0223 12:48:01.875437 5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0223 12:48:01.895493 5086 logs.go:277] 0 containers: []
W0223 12:48:01.895506 5086 logs.go:279] No container was found matching "etcd"
I0223 12:48:01.895579 5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0223 12:48:01.913839 5086 logs.go:277] 0 containers: []
W0223 12:48:01.913852 5086 logs.go:279] No container was found matching "coredns"
I0223 12:48:01.913925 5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0223 12:48:01.932635 5086 logs.go:277] 0 containers: []
W0223 12:48:01.932649 5086 logs.go:279] No container was found matching "kube-scheduler"
I0223 12:48:01.932717 5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0223 12:48:01.951841 5086 logs.go:277] 0 containers: []
W0223 12:48:01.951855 5086 logs.go:279] No container was found matching "kube-proxy"
I0223 12:48:01.951929 5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0223 12:48:01.970647 5086 logs.go:277] 0 containers: []
W0223 12:48:01.970667 5086 logs.go:279] No container was found matching "kube-controller-manager"
I0223 12:48:01.970734 5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0223 12:48:01.989439 5086 logs.go:277] 0 containers: []
W0223 12:48:01.989452 5086 logs.go:279] No container was found matching "kindnet"
I0223 12:48:01.989459 5086 logs.go:123] Gathering logs for kubelet ...
I0223 12:48:01.989467 5086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0223 12:48:02.028851 5086 logs.go:123] Gathering logs for dmesg ...
I0223 12:48:02.028866 5086 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0223 12:48:02.041107 5086 logs.go:123] Gathering logs for describe nodes ...
I0223 12:48:02.041119 5086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0223 12:48:02.094178 5086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0223 12:48:02.094189 5086 logs.go:123] Gathering logs for Docker ...
I0223 12:48:02.094196 5086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0223 12:48:02.118674 5086 logs.go:123] Gathering logs for container status ...
I0223 12:48:02.118687 5086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0223 12:48:04.165651 5086 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046914807s)
W0223 12:48:04.165774 5086 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 20:46:05.920044 3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 20:46:06.836308 3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 20:46:06.837138 3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 12:48:04.165789 5086 out.go:239] *
W0223 12:48:04.165921 5086 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 20:46:05.920044 3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 20:46:06.836308 3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 20:46:06.837138 3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 12:48:04.165934 5086 out.go:239] *
W0223 12:48:04.166556 5086 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0223 12:48:04.229090 5086 out.go:177]
W0223 12:48:04.292328 5086 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 20:46:05.920044 3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 20:46:06.836308 3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 20:46:06.837138 3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 12:48:04.292460 5086 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0223 12:48:04.292543 5086 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0223 12:48:04.314109 5086 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-611000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (261.86s)