=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-234000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0223 14:09:23.924039 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:11:40.077777 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:12:06.016983 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.022744 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.033633 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.055823 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.096315 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.178483 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.340714 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.662893 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:07.304536 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:07.762856 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:12:08.586646 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:11.147078 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:16.267183 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:26.507449 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:46.988225 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:13:27.950069 15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-234000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m28.850623547s)
-- stdout --
* [ingress-addon-legacy-234000] minikube v1.29.0 on Darwin 13.2
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-234000 in cluster ingress-addon-legacy-234000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0223 14:09:00.086067 18216 out.go:296] Setting OutFile to fd 1 ...
I0223 14:09:00.086231 18216 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 14:09:00.086236 18216 out.go:309] Setting ErrFile to fd 2...
I0223 14:09:00.086239 18216 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0223 14:09:00.086349 18216 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
I0223 14:09:00.087710 18216 out.go:303] Setting JSON to false
I0223 14:09:00.106163 18216 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5915,"bootTime":1677184225,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0223 14:09:00.106252 18216 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0223 14:09:00.127714 18216 out.go:177] * [ingress-addon-legacy-234000] minikube v1.29.0 on Darwin 13.2
I0223 14:09:00.170108 18216 notify.go:220] Checking for updates...
I0223 14:09:00.191685 18216 out.go:177] - MINIKUBE_LOCATION=15909
I0223 14:09:00.213037 18216 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
I0223 14:09:00.234757 18216 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0223 14:09:00.255805 18216 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 14:09:00.277043 18216 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
I0223 14:09:00.298944 18216 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 14:09:00.321144 18216 driver.go:365] Setting default libvirt URI to qemu:///system
I0223 14:09:00.383102 18216 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0223 14:09:00.383247 18216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0223 14:09:00.524533 18216 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 22:09:00.432919381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0223 14:09:00.545949 18216 out.go:177] * Using the docker driver based on user configuration
I0223 14:09:00.567823 18216 start.go:296] selected driver: docker
I0223 14:09:00.567856 18216 start.go:857] validating driver "docker" against <nil>
I0223 14:09:00.567875 18216 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 14:09:00.571851 18216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0223 14:09:00.712356 18216 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 22:09:00.620687221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0223 14:09:00.712513 18216 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0223 14:09:00.712690 18216 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0223 14:09:00.734109 18216 out.go:177] * Using Docker Desktop driver with root privileges
I0223 14:09:00.756264 18216 cni.go:84] Creating CNI manager for ""
I0223 14:09:00.756302 18216 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0223 14:09:00.756318 18216 start_flags.go:319] config:
{Name:ingress-addon-legacy-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 14:09:00.799882 18216 out.go:177] * Starting control plane node ingress-addon-legacy-234000 in cluster ingress-addon-legacy-234000
I0223 14:09:00.821220 18216 cache.go:120] Beginning downloading kic base image for docker with docker
I0223 14:09:00.843190 18216 out.go:177] * Pulling base image ...
I0223 14:09:00.865091 18216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 14:09:00.865133 18216 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
I0223 14:09:00.920760 18216 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
I0223 14:09:00.920783 18216 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
I0223 14:09:00.977669 18216 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0223 14:09:00.977710 18216 cache.go:57] Caching tarball of preloaded images
I0223 14:09:00.978136 18216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 14:09:01.000354 18216 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0223 14:09:01.021748 18216 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 14:09:01.238356 18216 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0223 14:09:18.597127 18216 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 14:09:18.597318 18216 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 14:09:19.221001 18216 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0223 14:09:19.221228 18216 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/config.json ...
I0223 14:09:19.221255 18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/config.json: {Name:mk12bfdb3c9a368b15e2e757666b494b163760fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 14:09:19.221537 18216 cache.go:193] Successfully downloaded all kic artifacts
I0223 14:09:19.221564 18216 start.go:364] acquiring machines lock for ingress-addon-legacy-234000: {Name:mk117825bbd4fd1d51609d1f587776a77771cdf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 14:09:19.221695 18216 start.go:368] acquired machines lock for "ingress-addon-legacy-234000" in 123.523µs
I0223 14:09:19.221720 18216 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 14:09:19.221763 18216 start.go:125] createHost starting for "" (driver="docker")
I0223 14:09:19.266015 18216 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0223 14:09:19.266377 18216 start.go:159] libmachine.API.Create for "ingress-addon-legacy-234000" (driver="docker")
I0223 14:09:19.266421 18216 client.go:168] LocalClient.Create starting
I0223 14:09:19.266619 18216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem
I0223 14:09:19.266701 18216 main.go:141] libmachine: Decoding PEM data...
I0223 14:09:19.266735 18216 main.go:141] libmachine: Parsing certificate...
I0223 14:09:19.266842 18216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem
I0223 14:09:19.266912 18216 main.go:141] libmachine: Decoding PEM data...
I0223 14:09:19.266930 18216 main.go:141] libmachine: Parsing certificate...
I0223 14:09:19.267789 18216 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-234000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0223 14:09:19.326417 18216 cli_runner.go:211] docker network inspect ingress-addon-legacy-234000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0223 14:09:19.326536 18216 network_create.go:281] running [docker network inspect ingress-addon-legacy-234000] to gather additional debugging logs...
I0223 14:09:19.326553 18216 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-234000
W0223 14:09:19.382500 18216 cli_runner.go:211] docker network inspect ingress-addon-legacy-234000 returned with exit code 1
I0223 14:09:19.382528 18216 network_create.go:284] error running [docker network inspect ingress-addon-legacy-234000]: docker network inspect ingress-addon-legacy-234000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-234000
I0223 14:09:19.382541 18216 network_create.go:286] output of [docker network inspect ingress-addon-legacy-234000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-234000
** /stderr **
I0223 14:09:19.382634 18216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0223 14:09:19.436845 18216 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00136ff60}
I0223 14:09:19.436887 18216 network_create.go:123] attempt to create docker network ingress-addon-legacy-234000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0223 14:09:19.436963 18216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-234000 ingress-addon-legacy-234000
I0223 14:09:19.525353 18216 network_create.go:107] docker network ingress-addon-legacy-234000 192.168.49.0/24 created
I0223 14:09:19.525407 18216 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-234000" container
I0223 14:09:19.525536 18216 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0223 14:09:19.583501 18216 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-234000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-234000 --label created_by.minikube.sigs.k8s.io=true
I0223 14:09:19.638514 18216 oci.go:103] Successfully created a docker volume ingress-addon-legacy-234000
I0223 14:09:19.638663 18216 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-234000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-234000 --entrypoint /usr/bin/test -v ingress-addon-legacy-234000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
I0223 14:09:20.061403 18216 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-234000
I0223 14:09:20.061450 18216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 14:09:20.061464 18216 kic.go:190] Starting extracting preloaded images to volume ...
I0223 14:09:20.061589 18216 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-234000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
I0223 14:09:26.015735 18216 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-234000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (5.954098735s)
I0223 14:09:26.015758 18216 kic.go:199] duration metric: took 5.954346 seconds to extract preloaded images to volume
I0223 14:09:26.015873 18216 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0223 14:09:26.164146 18216 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-234000 --name ingress-addon-legacy-234000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-234000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-234000 --network ingress-addon-legacy-234000 --ip 192.168.49.2 --volume ingress-addon-legacy-234000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
I0223 14:09:26.511842 18216 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Running}}
I0223 14:09:26.571082 18216 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Status}}
I0223 14:09:26.634126 18216 cli_runner.go:164] Run: docker exec ingress-addon-legacy-234000 stat /var/lib/dpkg/alternatives/iptables
I0223 14:09:26.738389 18216 oci.go:144] the created container "ingress-addon-legacy-234000" has a running status.
I0223 14:09:26.738430 18216 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa...
I0223 14:09:26.883872 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0223 14:09:26.883938 18216 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0223 14:09:27.054106 18216 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Status}}
I0223 14:09:27.112854 18216 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0223 14:09:27.112883 18216 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-234000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0223 14:09:27.213763 18216 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Status}}
I0223 14:09:27.270441 18216 machine.go:88] provisioning docker machine ...
I0223 14:09:27.270486 18216 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-234000"
I0223 14:09:27.270599 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:27.327331 18216 main.go:141] libmachine: Using SSH client type: native
I0223 14:09:27.327724 18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 58153 <nil> <nil>}
I0223 14:09:27.327740 18216 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-234000 && echo "ingress-addon-legacy-234000" | sudo tee /etc/hostname
I0223 14:09:27.470593 18216 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-234000
I0223 14:09:27.470676 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:27.529213 18216 main.go:141] libmachine: Using SSH client type: native
I0223 14:09:27.529576 18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 58153 <nil> <nil>}
I0223 14:09:27.529593 18216 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-234000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-234000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-234000' | sudo tee -a /etc/hosts;
fi
fi
I0223 14:09:27.663564 18216 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 14:09:27.663588 18216 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
I0223 14:09:27.663615 18216 ubuntu.go:177] setting up certificates
I0223 14:09:27.663627 18216 provision.go:83] configureAuth start
I0223 14:09:27.663701 18216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-234000
I0223 14:09:27.720189 18216 provision.go:138] copyHostCerts
I0223 14:09:27.720235 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
I0223 14:09:27.720298 18216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
I0223 14:09:27.720307 18216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
I0223 14:09:27.720414 18216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
I0223 14:09:27.720572 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
I0223 14:09:27.720606 18216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
I0223 14:09:27.720610 18216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
I0223 14:09:27.720684 18216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
I0223 14:09:27.720827 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
I0223 14:09:27.720863 18216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
I0223 14:09:27.720867 18216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
I0223 14:09:27.720928 18216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
I0223 14:09:27.721048 18216 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-234000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-234000]
I0223 14:09:27.986410 18216 provision.go:172] copyRemoteCerts
I0223 14:09:27.986479 18216 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 14:09:27.986538 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:28.044006 18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
I0223 14:09:28.138890 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0223 14:09:28.138983 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0223 14:09:28.156506 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem -> /etc/docker/server.pem
I0223 14:09:28.156588 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0223 14:09:28.173443 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0223 14:09:28.173532 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0223 14:09:28.191197 18216 provision.go:86] duration metric: configureAuth took 527.557432ms
I0223 14:09:28.191217 18216 ubuntu.go:193] setting minikube options for container-runtime
I0223 14:09:28.191375 18216 config.go:182] Loaded profile config "ingress-addon-legacy-234000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0223 14:09:28.191437 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:28.248911 18216 main.go:141] libmachine: Using SSH client type: native
I0223 14:09:28.249261 18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 58153 <nil> <nil>}
I0223 14:09:28.249279 18216 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 14:09:28.385137 18216 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0223 14:09:28.385156 18216 ubuntu.go:71] root file system type: overlay
I0223 14:09:28.385274 18216 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 14:09:28.385363 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:28.441731 18216 main.go:141] libmachine: Using SSH client type: native
I0223 14:09:28.442089 18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 58153 <nil> <nil>}
I0223 14:09:28.442139 18216 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 14:09:28.585791 18216 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 14:09:28.585901 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:28.643223 18216 main.go:141] libmachine: Using SSH client type: native
I0223 14:09:28.643581 18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 58153 <nil> <nil>}
I0223 14:09:28.643596 18216 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 14:09:29.259206 18216 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-02-23 22:09:28.583009950 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0223 14:09:29.259235 18216 machine.go:91] provisioned docker machine in 1.988789802s
I0223 14:09:29.259241 18216 client.go:171] LocalClient.Create took 9.992902055s
I0223 14:09:29.259258 18216 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-234000" took 9.992972968s
I0223 14:09:29.259269 18216 start.go:300] post-start starting for "ingress-addon-legacy-234000" (driver="docker")
I0223 14:09:29.259276 18216 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 14:09:29.259368 18216 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 14:09:29.259421 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:29.317545 18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
I0223 14:09:29.412529 18216 ssh_runner.go:195] Run: cat /etc/os-release
I0223 14:09:29.416046 18216 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0223 14:09:29.416067 18216 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0223 14:09:29.416074 18216 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0223 14:09:29.416079 18216 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0223 14:09:29.416089 18216 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
I0223 14:09:29.416188 18216 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
I0223 14:09:29.416366 18216 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
I0223 14:09:29.416372 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /etc/ssl/certs/152102.pem
I0223 14:09:29.416566 18216 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 14:09:29.423772 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
I0223 14:09:29.440907 18216 start.go:303] post-start completed in 181.630762ms
I0223 14:09:29.441476 18216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-234000
I0223 14:09:29.498283 18216 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/config.json ...
I0223 14:09:29.498709 18216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0223 14:09:29.498772 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:29.556983 18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
I0223 14:09:29.648368 18216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0223 14:09:29.653138 18216 start.go:128] duration metric: createHost completed in 10.431458333s
I0223 14:09:29.653158 18216 start.go:83] releasing machines lock for "ingress-addon-legacy-234000", held for 10.431548307s
I0223 14:09:29.653280 18216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-234000
I0223 14:09:29.710445 18216 ssh_runner.go:195] Run: cat /version.json
I0223 14:09:29.710488 18216 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0223 14:09:29.710523 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:29.710560 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:29.769656 18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
I0223 14:09:29.770191 18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
I0223 14:09:30.119861 18216 ssh_runner.go:195] Run: systemctl --version
I0223 14:09:30.124449 18216 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0223 14:09:30.129431 18216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0223 14:09:30.148809 18216 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0223 14:09:30.148899 18216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0223 14:09:30.162362 18216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0223 14:09:30.169760 18216 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0223 14:09:30.169777 18216 start.go:485] detecting cgroup driver to use...
I0223 14:09:30.169788 18216 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 14:09:30.169876 18216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 14:09:30.182674 18216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0223 14:09:30.190880 18216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 14:09:30.199225 18216 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 14:09:30.199284 18216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 14:09:30.207631 18216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 14:09:30.215731 18216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 14:09:30.223815 18216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 14:09:30.232151 18216 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 14:09:30.239860 18216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0223 14:09:30.248162 18216 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 14:09:30.255420 18216 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 14:09:30.262379 18216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 14:09:30.329271 18216 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 14:09:30.396351 18216 start.go:485] detecting cgroup driver to use...
I0223 14:09:30.396372 18216 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 14:09:30.396447 18216 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 14:09:30.407153 18216 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0223 14:09:30.407229 18216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 14:09:30.417092 18216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0223 14:09:30.430752 18216 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 14:09:30.521747 18216 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 14:09:30.612712 18216 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 14:09:30.612734 18216 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0223 14:09:30.625778 18216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 14:09:30.716713 18216 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 14:09:30.933212 18216 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 14:09:30.958050 18216 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 14:09:31.005379 18216 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
I0223 14:09:31.005622 18216 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-234000 dig +short host.docker.internal
I0223 14:09:31.117465 18216 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0223 14:09:31.117576 18216 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0223 14:09:31.121874 18216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 14:09:31.131799 18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
I0223 14:09:31.187385 18216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 14:09:31.187470 18216 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 14:09:31.207365 18216 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0223 14:09:31.207383 18216 docker.go:560] Images already preloaded, skipping extraction
I0223 14:09:31.207483 18216 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 14:09:31.227312 18216 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0223 14:09:31.227328 18216 cache_images.go:84] Images are preloaded, skipping loading
I0223 14:09:31.227422 18216 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0223 14:09:31.252625 18216 cni.go:84] Creating CNI manager for ""
I0223 14:09:31.252643 18216 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0223 14:09:31.252659 18216 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0223 14:09:31.252678 18216 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-234000 NodeName:ingress-addon-legacy-234000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0223 14:09:31.252784 18216 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-234000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0223 14:09:31.252897 18216 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-234000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0223 14:09:31.252971 18216 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0223 14:09:31.260769 18216 binaries.go:44] Found k8s binaries, skipping transfer
I0223 14:09:31.260833 18216 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0223 14:09:31.267984 18216 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0223 14:09:31.280395 18216 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0223 14:09:31.292889 18216 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0223 14:09:31.305695 18216 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0223 14:09:31.309546 18216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 14:09:31.319124 18216 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000 for IP: 192.168.49.2
I0223 14:09:31.319142 18216 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 14:09:31.319314 18216 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
I0223 14:09:31.319377 18216 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
I0223 14:09:31.319428 18216 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.key
I0223 14:09:31.319440 18216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.crt with IP's: []
I0223 14:09:31.402212 18216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.crt ...
I0223 14:09:31.402221 18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.crt: {Name:mka83784595163acae28f8a405113a29c8ea9c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 14:09:31.402498 18216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.key ...
I0223 14:09:31.402521 18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.key: {Name:mk96fcfd95bf7721cd99c441f54df0de6313ebb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 14:09:31.402705 18216 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key.dd3b5fb2
I0223 14:09:31.402719 18216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0223 14:09:31.488818 18216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt.dd3b5fb2 ...
I0223 14:09:31.488827 18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt.dd3b5fb2: {Name:mk21bd644e91e2d025473b2665c4f1ebf6259523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 14:09:31.489047 18216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key.dd3b5fb2 ...
I0223 14:09:31.489054 18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key.dd3b5fb2: {Name:mk6223f0b125b2b52d35b702c877e6102f293e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 14:09:31.489231 18216 certs.go:333] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt
I0223 14:09:31.489492 18216 certs.go:337] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key
I0223 14:09:31.489671 18216 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key
I0223 14:09:31.489690 18216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt with IP's: []
I0223 14:09:31.631795 18216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt ...
I0223 14:09:31.631805 18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt: {Name:mk5562f0ddb7e97b10f2f26074b304376416df09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 14:09:31.632047 18216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key ...
I0223 14:09:31.632056 18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key: {Name:mk10285d1a3bc8975016c7e39267005300abacce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 14:09:31.632256 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0223 14:09:31.632285 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0223 14:09:31.632305 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0223 14:09:31.632328 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0223 14:09:31.632349 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0223 14:09:31.632371 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0223 14:09:31.632391 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0223 14:09:31.632409 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0223 14:09:31.632506 18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
W0223 14:09:31.632552 18216 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
I0223 14:09:31.632562 18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
I0223 14:09:31.632594 18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
I0223 14:09:31.632643 18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
I0223 14:09:31.632678 18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
I0223 14:09:31.632742 18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
I0223 14:09:31.632781 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0223 14:09:31.632802 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem -> /usr/share/ca-certificates/15210.pem
I0223 14:09:31.632821 18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /usr/share/ca-certificates/152102.pem
I0223 14:09:31.633327 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0223 14:09:31.651252 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0223 14:09:31.668174 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0223 14:09:31.685086 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0223 14:09:31.702016 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0223 14:09:31.718811 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0223 14:09:31.735782 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0223 14:09:31.752805 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0223 14:09:31.769539 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0223 14:09:31.787007 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
I0223 14:09:31.803958 18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
I0223 14:09:31.820891 18216 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0223 14:09:31.833749 18216 ssh_runner.go:195] Run: openssl version
I0223 14:09:31.839234 18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
I0223 14:09:31.847158 18216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
I0223 14:09:31.851010 18216 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
I0223 14:09:31.851056 18216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
I0223 14:09:31.856279 18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
I0223 14:09:31.864157 18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
I0223 14:09:31.872078 18216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
I0223 14:09:31.875976 18216 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
I0223 14:09:31.876029 18216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
I0223 14:09:31.881174 18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
I0223 14:09:31.889302 18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0223 14:09:31.897170 18216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0223 14:09:31.901395 18216 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
I0223 14:09:31.901447 18216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0223 14:09:31.906745 18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0223 14:09:31.914660 18216 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0223 14:09:31.914766 18216 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 14:09:31.933269 18216 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0223 14:09:31.940931 18216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0223 14:09:31.948092 18216 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0223 14:09:31.948158 18216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 14:09:31.955351 18216 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 14:09:31.955375 18216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0223 14:09:32.002658 18216 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0223 14:09:32.002732 18216 kubeadm.go:322] [preflight] Running pre-flight checks
I0223 14:09:32.168079 18216 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0223 14:09:32.168209 18216 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0223 14:09:32.168295 18216 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 14:09:32.319675 18216 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 14:09:32.320147 18216 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 14:09:32.320188 18216 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0223 14:09:32.397172 18216 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 14:09:32.438604 18216 out.go:204] - Generating certificates and keys ...
I0223 14:09:32.438726 18216 kubeadm.go:322] [certs] Using existing ca certificate authority
I0223 14:09:32.438813 18216 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0223 14:09:32.535847 18216 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0223 14:09:32.700723 18216 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0223 14:09:32.802484 18216 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0223 14:09:33.043420 18216 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0223 14:09:33.138958 18216 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0223 14:09:33.139091 18216 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-234000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0223 14:09:33.265888 18216 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0223 14:09:33.266013 18216 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-234000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0223 14:09:33.338315 18216 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0223 14:09:33.658707 18216 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0223 14:09:33.836892 18216 kubeadm.go:322] [certs] Generating "sa" key and public key
I0223 14:09:33.836934 18216 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 14:09:33.958489 18216 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 14:09:34.149348 18216 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 14:09:34.530822 18216 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 14:09:34.791992 18216 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 14:09:34.792675 18216 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 14:09:34.834960 18216 out.go:204] - Booting up control plane ...
I0223 14:09:34.835085 18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 14:09:34.835161 18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 14:09:34.835281 18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 14:09:34.835374 18216 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 14:09:34.835502 18216 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0223 14:10:14.801852 18216 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0223 14:10:14.802543 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:10:14.802796 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:10:19.803192 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:10:19.803343 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:10:29.805207 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:10:29.805440 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:10:49.805481 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:10:49.805672 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:11:29.806183 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:11:29.806356 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:11:29.806370 18216 kubeadm.go:322]
I0223 14:11:29.806400 18216 kubeadm.go:322] Unfortunately, an error has occurred:
I0223 14:11:29.806447 18216 kubeadm.go:322] timed out waiting for the condition
I0223 14:11:29.806464 18216 kubeadm.go:322]
I0223 14:11:29.806516 18216 kubeadm.go:322] This error is likely caused by:
I0223 14:11:29.806559 18216 kubeadm.go:322] - The kubelet is not running
I0223 14:11:29.806675 18216 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0223 14:11:29.806687 18216 kubeadm.go:322]
I0223 14:11:29.806771 18216 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0223 14:11:29.806818 18216 kubeadm.go:322] - 'systemctl status kubelet'
I0223 14:11:29.806851 18216 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0223 14:11:29.806857 18216 kubeadm.go:322]
I0223 14:11:29.806963 18216 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0223 14:11:29.807022 18216 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0223 14:11:29.807029 18216 kubeadm.go:322]
I0223 14:11:29.807101 18216 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0223 14:11:29.807158 18216 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0223 14:11:29.807228 18216 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0223 14:11:29.807257 18216 kubeadm.go:322] - 'docker logs CONTAINERID'
I0223 14:11:29.807263 18216 kubeadm.go:322]
I0223 14:11:29.809880 18216 kubeadm.go:322] W0223 22:09:32.002014 1159 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0223 14:11:29.810023 18216 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0223 14:11:29.810076 18216 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0223 14:11:29.810200 18216 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0223 14:11:29.810293 18216 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0223 14:11:29.810400 18216 kubeadm.go:322] W0223 22:09:34.797029 1159 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 14:11:29.810506 18216 kubeadm.go:322] W0223 22:09:34.797829 1159 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 14:11:29.810582 18216 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0223 14:11:29.810640 18216 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0223 14:11:29.810856 18216 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-234000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-234000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 22:09:32.002014 1159 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 22:09:34.797029 1159 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 22:09:34.797829 1159 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0223 14:11:29.810888 18216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0223 14:11:30.233889 18216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 14:11:30.243645 18216 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0223 14:11:30.243706 18216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 14:11:30.251214 18216 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 14:11:30.251256 18216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0223 14:11:30.298834 18216 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0223 14:11:30.298886 18216 kubeadm.go:322] [preflight] Running pre-flight checks
I0223 14:11:30.459544 18216 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0223 14:11:30.459643 18216 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0223 14:11:30.459732 18216 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 14:11:30.611028 18216 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 14:11:30.611566 18216 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 14:11:30.611628 18216 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0223 14:11:30.681966 18216 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 14:11:30.724116 18216 out.go:204] - Generating certificates and keys ...
I0223 14:11:30.724230 18216 kubeadm.go:322] [certs] Using existing ca certificate authority
I0223 14:11:30.724303 18216 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0223 14:11:30.724372 18216 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0223 14:11:30.724434 18216 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0223 14:11:30.724506 18216 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0223 14:11:30.724558 18216 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0223 14:11:30.724635 18216 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0223 14:11:30.724692 18216 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0223 14:11:30.724745 18216 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0223 14:11:30.724816 18216 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0223 14:11:30.724846 18216 kubeadm.go:322] [certs] Using the existing "sa" key
I0223 14:11:30.724923 18216 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 14:11:30.843315 18216 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 14:11:30.941283 18216 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 14:11:31.141792 18216 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 14:11:31.304279 18216 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 14:11:31.304990 18216 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 14:11:31.326699 18216 out.go:204] - Booting up control plane ...
I0223 14:11:31.326888 18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 14:11:31.327020 18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 14:11:31.327146 18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 14:11:31.327298 18216 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 14:11:31.327573 18216 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0223 14:12:11.313386 18216 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0223 14:12:11.314071 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:12:11.314351 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:12:16.314649 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:12:16.314814 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:12:26.316845 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:12:26.317100 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:12:46.317291 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:12:46.317512 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:13:26.319085 18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0223 14:13:26.319333 18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0223 14:13:26.319345 18216 kubeadm.go:322]
I0223 14:13:26.319432 18216 kubeadm.go:322] Unfortunately, an error has occurred:
I0223 14:13:26.319492 18216 kubeadm.go:322] timed out waiting for the condition
I0223 14:13:26.319506 18216 kubeadm.go:322]
I0223 14:13:26.319553 18216 kubeadm.go:322] This error is likely caused by:
I0223 14:13:26.319617 18216 kubeadm.go:322] - The kubelet is not running
I0223 14:13:26.319811 18216 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0223 14:13:26.319825 18216 kubeadm.go:322]
I0223 14:13:26.319939 18216 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0223 14:13:26.319983 18216 kubeadm.go:322] - 'systemctl status kubelet'
I0223 14:13:26.320029 18216 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0223 14:13:26.320043 18216 kubeadm.go:322]
I0223 14:13:26.320193 18216 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0223 14:13:26.320293 18216 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0223 14:13:26.320308 18216 kubeadm.go:322]
I0223 14:13:26.320436 18216 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0223 14:13:26.320492 18216 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0223 14:13:26.320561 18216 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0223 14:13:26.320606 18216 kubeadm.go:322] - 'docker logs CONTAINERID'
I0223 14:13:26.320616 18216 kubeadm.go:322]
I0223 14:13:26.323451 18216 kubeadm.go:322] W0223 22:11:30.298075 3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0223 14:13:26.323591 18216 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0223 14:13:26.323676 18216 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0223 14:13:26.323785 18216 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0223 14:13:26.323875 18216 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0223 14:13:26.323973 18216 kubeadm.go:322] W0223 22:11:31.309096 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 14:13:26.324087 18216 kubeadm.go:322] W0223 22:11:31.309794 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 14:13:26.324165 18216 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0223 14:13:26.324233 18216 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0223 14:13:26.324245 18216 kubeadm.go:403] StartCluster complete in 3m54.411659639s
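The troubleshooting steps kubeadm prints above can be run directly inside the failed node; with the docker driver the node is itself a container, reachable over minikube ssh. A minimal sketch, assuming the profile name ingress-addon-legacy-234000 from this run, that curl is present in the node image, and CONTAINERID as a placeholder for whatever ID the grep turns up:

  out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-234000
  # inside the node: is the kubelet service running, and why did it stop?
  sudo systemctl status kubelet
  sudo journalctl -xeu kubelet | tail -n 100
  # the health endpoint kubeadm was polling during [kubelet-check]
  curl -sSL http://localhost:10248/healthz
  # any control-plane containers the runtime managed to start
  docker ps -a | grep kube | grep -v pause
  docker logs CONTAINERID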
I0223 14:13:26.324334 18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0223 14:13:26.342685 18216 logs.go:277] 0 containers: []
W0223 14:13:26.342698 18216 logs.go:279] No container was found matching "kube-apiserver"
I0223 14:13:26.342776 18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0223 14:13:26.361812 18216 logs.go:277] 0 containers: []
W0223 14:13:26.361825 18216 logs.go:279] No container was found matching "etcd"
I0223 14:13:26.361898 18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0223 14:13:26.380843 18216 logs.go:277] 0 containers: []
W0223 14:13:26.380855 18216 logs.go:279] No container was found matching "coredns"
I0223 14:13:26.380920 18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0223 14:13:26.399464 18216 logs.go:277] 0 containers: []
W0223 14:13:26.399481 18216 logs.go:279] No container was found matching "kube-scheduler"
I0223 14:13:26.399546 18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0223 14:13:26.419477 18216 logs.go:277] 0 containers: []
W0223 14:13:26.419490 18216 logs.go:279] No container was found matching "kube-proxy"
I0223 14:13:26.419564 18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0223 14:13:26.438713 18216 logs.go:277] 0 containers: []
W0223 14:13:26.438728 18216 logs.go:279] No container was found matching "kube-controller-manager"
I0223 14:13:26.438808 18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0223 14:13:26.457712 18216 logs.go:277] 0 containers: []
W0223 14:13:26.457727 18216 logs.go:279] No container was found matching "kindnet"
I0223 14:13:26.457734 18216 logs.go:123] Gathering logs for kubelet ...
I0223 14:13:26.457742 18216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0223 14:13:26.495670 18216 logs.go:123] Gathering logs for dmesg ...
I0223 14:13:26.495684 18216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0223 14:13:26.508002 18216 logs.go:123] Gathering logs for describe nodes ...
I0223 14:13:26.508018 18216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0223 14:13:26.560771 18216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0223 14:13:26.560782 18216 logs.go:123] Gathering logs for Docker ...
I0223 14:13:26.560789 18216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0223 14:13:26.585074 18216 logs.go:123] Gathering logs for container status ...
I0223 14:13:26.585089 18216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0223 14:13:28.632743 18216 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047660062s)
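The name filters in the ssh_runner calls above are how minikube probes for each control-plane container: with the dockershim runtime each pod container is named k8s_<container>_<pod>_..., so a prefix filter is enough, and an empty result (as here, for every component) means the component never started. A minimal sketch of the same checks run by hand inside the node, reusing the component names and the fallback command from this log:

  docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
  docker ps -a --filter=name=k8s_etcd --format={{.ID}}
  # the container-status fallback minikube itself used
  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a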
W0223 14:13:28.632872 18216 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 22:11:30.298075 3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 22:11:31.309096 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 22:11:31.309794 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 14:13:28.632892 18216 out.go:239] *
W0223 14:13:28.633015 18216 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 22:11:30.298075 3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 22:11:31.309096 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 22:11:31.309794 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 14:13:28.633031 18216 out.go:239] *
W0223 14:13:28.633678 18216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
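The log bundle the box above asks for can be collected from this profile with the same binary under test; a minimal sketch, assuming the profile name from this run:

  out/minikube-darwin-amd64 logs -p ingress-addon-legacy-234000 --file=logs.txt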
I0223 14:13:28.696517 18216 out.go:177]
W0223 14:13:28.760593 18216 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 22:11:30.298075 3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 22:11:31.309096 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 22:11:31.309794 3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 14:13:28.760728 18216 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0223 14:13:28.760831 18216 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0223 14:13:28.802524 18216 out.go:177]
** /stderr **
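One way to act on the suggestion above is to recreate the profile with the kubelet pinned to the systemd cgroup driver, which also lines up with the IsDockerSystemdCheck warning in the kubeadm output. A minimal sketch reusing this run's start flags (this is the log's own suggestion, not a verified fix for this particular failure):

  out/minikube-darwin-amd64 delete -p ingress-addon-legacy-234000
  out/minikube-darwin-amd64 start -p ingress-addon-legacy-234000 \
    --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
    --extra-config=kubelet.cgroup-driver=systemd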
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-234000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (268.88s)