=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-802000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0203 14:18:36.845852 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:20:53.003593 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:21:10.662032 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.667676 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.678927 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.700477 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.741673 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.823119 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.985294 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:11.305431 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:11.945649 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:13.228081 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:15.789344 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:20.699065 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:21:20.912120 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:31.153857 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:51.636900 2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-802000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m16.063963872s)
-- stdout --
* [ingress-addon-legacy-802000] minikube v1.29.0 on Darwin 13.2
- MINIKUBE_LOCATION=15770
- KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-802000 in cluster ingress-addon-legacy-802000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0203 14:18:09.423507 5571 out.go:296] Setting OutFile to fd 1 ...
I0203 14:18:09.423660 5571 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0203 14:18:09.423666 5571 out.go:309] Setting ErrFile to fd 2...
I0203 14:18:09.423670 5571 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0203 14:18:09.423776 5571 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
I0203 14:18:09.424325 5571 out.go:303] Setting JSON to false
I0203 14:18:09.442577 5571 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1064,"bootTime":1675461625,"procs":378,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W0203 14:18:09.442672 5571 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0203 14:18:09.464909 5571 out.go:177] * [ingress-addon-legacy-802000] minikube v1.29.0 on Darwin 13.2
I0203 14:18:09.524870 5571 notify.go:220] Checking for updates...
I0203 14:18:09.546426 5571 out.go:177] - MINIKUBE_LOCATION=15770
I0203 14:18:09.567616 5571 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
I0203 14:18:09.589587 5571 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0203 14:18:09.610706 5571 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0203 14:18:09.632787 5571 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
I0203 14:18:09.654788 5571 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0203 14:18:09.676908 5571 driver.go:365] Setting default libvirt URI to qemu:///system
I0203 14:18:09.741925 5571 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0203 14:18:09.742051 5571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0203 14:18:09.881469 5571 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-03 22:18:09.790770114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0203 14:18:09.925202 5571 out.go:177] * Using the docker driver based on user configuration
I0203 14:18:09.947205 5571 start.go:296] selected driver: docker
I0203 14:18:09.947232 5571 start.go:857] validating driver "docker" against <nil>
I0203 14:18:09.947258 5571 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0203 14:18:09.951114 5571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0203 14:18:10.092697 5571 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-03 22:18:09.999825183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0203 14:18:10.092823 5571 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0203 14:18:10.093003 5571 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0203 14:18:10.114414 5571 out.go:177] * Using Docker Desktop driver with root privileges
I0203 14:18:10.136449 5571 cni.go:84] Creating CNI manager for ""
I0203 14:18:10.136486 5571 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0203 14:18:10.136498 5571 start_flags.go:319] config:
{Name:ingress-addon-legacy-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0203 14:18:10.158163 5571 out.go:177] * Starting control plane node ingress-addon-legacy-802000 in cluster ingress-addon-legacy-802000
I0203 14:18:10.180315 5571 cache.go:120] Beginning downloading kic base image for docker with docker
I0203 14:18:10.202664 5571 out.go:177] * Pulling base image ...
I0203 14:18:10.246646 5571 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0203 14:18:10.246708 5571 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
I0203 14:18:10.298680 5571 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0203 14:18:10.298710 5571 cache.go:57] Caching tarball of preloaded images
I0203 14:18:10.298978 5571 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0203 14:18:10.326769 5571 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0203 14:18:10.368249 5571 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0203 14:18:10.370813 5571 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
I0203 14:18:10.370832 5571 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
I0203 14:18:10.449232 5571 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0203 14:18:14.926735 5571 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0203 14:18:14.926895 5571 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0203 14:18:15.544927 5571 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0203 14:18:15.545201 5571 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/config.json ...
I0203 14:18:15.545226 5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/config.json: {Name:mk1d19ec64aab48957c1893a621acdaa55ff6817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 14:18:15.545550 5571 cache.go:193] Successfully downloaded all kic artifacts
I0203 14:18:15.545576 5571 start.go:364] acquiring machines lock for ingress-addon-legacy-802000: {Name:mkae2c167f7c614411367460fc6d96a043b50f3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0203 14:18:15.545728 5571 start.go:368] acquired machines lock for "ingress-addon-legacy-802000" in 145.343µs
I0203 14:18:15.545749 5571 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0203 14:18:15.545859 5571 start.go:125] createHost starting for "" (driver="docker")
I0203 14:18:15.568078 5571 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0203 14:18:15.568388 5571 start.go:159] libmachine.API.Create for "ingress-addon-legacy-802000" (driver="docker")
I0203 14:18:15.568460 5571 client.go:168] LocalClient.Create starting
I0203 14:18:15.568656 5571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem
I0203 14:18:15.568746 5571 main.go:141] libmachine: Decoding PEM data...
I0203 14:18:15.568778 5571 main.go:141] libmachine: Parsing certificate...
I0203 14:18:15.568870 5571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem
I0203 14:18:15.568941 5571 main.go:141] libmachine: Decoding PEM data...
I0203 14:18:15.568962 5571 main.go:141] libmachine: Parsing certificate...
I0203 14:18:15.589554 5571 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-802000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0203 14:18:15.647039 5571 cli_runner.go:211] docker network inspect ingress-addon-legacy-802000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0203 14:18:15.647154 5571 network_create.go:281] running [docker network inspect ingress-addon-legacy-802000] to gather additional debugging logs...
I0203 14:18:15.647172 5571 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-802000
W0203 14:18:15.702198 5571 cli_runner.go:211] docker network inspect ingress-addon-legacy-802000 returned with exit code 1
I0203 14:18:15.702228 5571 network_create.go:284] error running [docker network inspect ingress-addon-legacy-802000]: docker network inspect ingress-addon-legacy-802000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-802000
I0203 14:18:15.702250 5571 network_create.go:286] output of [docker network inspect ingress-addon-legacy-802000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-802000
** /stderr **
I0203 14:18:15.702345 5571 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0203 14:18:15.755975 5571 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00044d5a0}
I0203 14:18:15.756012 5571 network_create.go:123] attempt to create docker network ingress-addon-legacy-802000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0203 14:18:15.756094 5571 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-802000 ingress-addon-legacy-802000
I0203 14:18:15.843957 5571 network_create.go:107] docker network ingress-addon-legacy-802000 192.168.49.0/24 created
I0203 14:18:15.843993 5571 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-802000" container
I0203 14:18:15.844122 5571 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0203 14:18:15.897810 5571 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-802000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-802000 --label created_by.minikube.sigs.k8s.io=true
I0203 14:18:15.951736 5571 oci.go:103] Successfully created a docker volume ingress-addon-legacy-802000
I0203 14:18:15.951882 5571 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-802000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-802000 --entrypoint /usr/bin/test -v ingress-addon-legacy-802000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -d /var/lib
I0203 14:18:16.386462 5571 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-802000
I0203 14:18:16.386500 5571 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0203 14:18:16.386516 5571 kic.go:190] Starting extracting preloaded images to volume ...
I0203 14:18:16.386632 5571 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-802000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir
I0203 14:18:22.580800 5571 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-802000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.193929274s)
I0203 14:18:22.580825 5571 kic.go:199] duration metric: took 6.194139 seconds to extract preloaded images to volume
I0203 14:18:22.580942 5571 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0203 14:18:22.721119 5571 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-802000 --name ingress-addon-legacy-802000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-802000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-802000 --network ingress-addon-legacy-802000 --ip 192.168.49.2 --volume ingress-addon-legacy-802000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8
I0203 14:18:23.071913 5571 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Running}}
I0203 14:18:23.132577 5571 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Status}}
I0203 14:18:23.194306 5571 cli_runner.go:164] Run: docker exec ingress-addon-legacy-802000 stat /var/lib/dpkg/alternatives/iptables
I0203 14:18:23.310391 5571 oci.go:144] the created container "ingress-addon-legacy-802000" has a running status.
I0203 14:18:23.310431 5571 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa...
I0203 14:18:23.356063 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0203 14:18:23.356141 5571 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0203 14:18:23.463561 5571 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Status}}
I0203 14:18:23.524699 5571 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0203 14:18:23.524720 5571 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-802000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0203 14:18:23.630861 5571 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Status}}
I0203 14:18:23.686121 5571 machine.go:88] provisioning docker machine ...
I0203 14:18:23.686156 5571 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-802000"
I0203 14:18:23.686255 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:23.744484 5571 main.go:141] libmachine: Using SSH client type: native
I0203 14:18:23.744685 5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50695 <nil> <nil>}
I0203 14:18:23.744700 5571 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-802000 && echo "ingress-addon-legacy-802000" | sudo tee /etc/hostname
I0203 14:18:23.884102 5571 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-802000
I0203 14:18:23.884197 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:23.942156 5571 main.go:141] libmachine: Using SSH client type: native
I0203 14:18:23.942328 5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50695 <nil> <nil>}
I0203 14:18:23.942343 5571 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-802000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-802000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-802000' | sudo tee -a /etc/hosts;
fi
fi
I0203 14:18:24.071843 5571 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0203 14:18:24.071864 5571 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
I0203 14:18:24.071883 5571 ubuntu.go:177] setting up certificates
I0203 14:18:24.071898 5571 provision.go:83] configureAuth start
I0203 14:18:24.071974 5571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-802000
I0203 14:18:24.128740 5571 provision.go:138] copyHostCerts
I0203 14:18:24.128785 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
I0203 14:18:24.128839 5571 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
I0203 14:18:24.128846 5571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
I0203 14:18:24.128975 5571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
I0203 14:18:24.129138 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
I0203 14:18:24.129172 5571 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
I0203 14:18:24.129177 5571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
I0203 14:18:24.129249 5571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
I0203 14:18:24.129364 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
I0203 14:18:24.129406 5571 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
I0203 14:18:24.129410 5571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
I0203 14:18:24.129472 5571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
I0203 14:18:24.129594 5571 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-802000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-802000]
I0203 14:18:24.418431 5571 provision.go:172] copyRemoteCerts
I0203 14:18:24.418488 5571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0203 14:18:24.418538 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:24.477732 5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
I0203 14:18:24.569965 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0203 14:18:24.570051 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0203 14:18:24.587504 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem -> /etc/docker/server.pem
I0203 14:18:24.587596 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0203 14:18:24.604489 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0203 14:18:24.604567 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0203 14:18:24.621493 5571 provision.go:86] duration metric: configureAuth took 549.567212ms
I0203 14:18:24.621507 5571 ubuntu.go:193] setting minikube options for container-runtime
I0203 14:18:24.621654 5571 config.go:180] Loaded profile config "ingress-addon-legacy-802000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0203 14:18:24.621711 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:24.678288 5571 main.go:141] libmachine: Using SSH client type: native
I0203 14:18:24.678463 5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50695 <nil> <nil>}
I0203 14:18:24.678478 5571 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0203 14:18:24.807691 5571 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0203 14:18:24.807704 5571 ubuntu.go:71] root file system type: overlay
I0203 14:18:24.807850 5571 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0203 14:18:24.807940 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:24.864861 5571 main.go:141] libmachine: Using SSH client type: native
I0203 14:18:24.865027 5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50695 <nil> <nil>}
I0203 14:18:24.865079 5571 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0203 14:18:24.999973 5571 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0203 14:18:25.000098 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:25.058151 5571 main.go:141] libmachine: Using SSH client type: native
I0203 14:18:25.058313 5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 127.0.0.1 50695 <nil> <nil>}
I0203 14:18:25.058326 5571 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0203 14:18:25.643207 5571 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-01-19 17:34:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-02-03 22:18:24.998107984 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0203 14:18:25.643238 5571 machine.go:91] provisioned docker machine in 1.957040904s
I0203 14:18:25.643245 5571 client.go:171] LocalClient.Create took 10.074508294s
I0203 14:18:25.643263 5571 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-802000" took 10.074611027s
I0203 14:18:25.643275 5571 start.go:300] post-start starting for "ingress-addon-legacy-802000" (driver="docker")
I0203 14:18:25.643282 5571 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0203 14:18:25.643360 5571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0203 14:18:25.643414 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:25.699986 5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
I0203 14:18:25.792604 5571 ssh_runner.go:195] Run: cat /etc/os-release
I0203 14:18:25.796169 5571 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0203 14:18:25.796188 5571 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0203 14:18:25.796200 5571 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0203 14:18:25.796206 5571 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0203 14:18:25.796216 5571 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
I0203 14:18:25.796338 5571 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
I0203 14:18:25.796514 5571 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
I0203 14:18:25.796520 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> /etc/ssl/certs/25682.pem
I0203 14:18:25.796718 5571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0203 14:18:25.804161 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
I0203 14:18:25.821152 5571 start.go:303] post-start completed in 177.858737ms
I0203 14:18:25.821668 5571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-802000
I0203 14:18:25.879231 5571 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/config.json ...
I0203 14:18:25.879645 5571 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0203 14:18:25.879703 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:25.935822 5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
I0203 14:18:26.025442 5571 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0203 14:18:26.029858 5571 start.go:128] duration metric: createHost completed in 10.483713619s
I0203 14:18:26.029874 5571 start.go:83] releasing machines lock for "ingress-addon-legacy-802000", held for 10.483859799s
I0203 14:18:26.029949 5571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-802000
I0203 14:18:26.085605 5571 ssh_runner.go:195] Run: cat /version.json
I0203 14:18:26.085639 5571 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0203 14:18:26.085672 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:26.085710 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:26.147127 5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
I0203 14:18:26.147297 5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
I0203 14:18:26.428525 5571 ssh_runner.go:195] Run: systemctl --version
I0203 14:18:26.433390 5571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0203 14:18:26.438275 5571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0203 14:18:26.458101 5571 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0203 14:18:26.458182 5571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0203 14:18:26.471878 5571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0203 14:18:26.479454 5571 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0203 14:18:26.479470 5571 start.go:483] detecting cgroup driver to use...
I0203 14:18:26.479482 5571 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0203 14:18:26.479570 5571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0203 14:18:26.492705 5571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0203 14:18:26.501122 5571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0203 14:18:26.509613 5571 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0203 14:18:26.509668 5571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0203 14:18:26.518054 5571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0203 14:18:26.526198 5571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0203 14:18:26.534377 5571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0203 14:18:26.542633 5571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0203 14:18:26.550511 5571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0203 14:18:26.558669 5571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0203 14:18:26.565918 5571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0203 14:18:26.572818 5571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0203 14:18:26.636962 5571 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0203 14:18:26.711656 5571 start.go:483] detecting cgroup driver to use...
I0203 14:18:26.711683 5571 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0203 14:18:26.711747 5571 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0203 14:18:26.721948 5571 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0203 14:18:26.722017 5571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0203 14:18:26.731877 5571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0203 14:18:26.746787 5571 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0203 14:18:26.855944 5571 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0203 14:18:26.945031 5571 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0203 14:18:26.945046 5571 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
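(The 144-byte /etc/docker/daemon.json written here is what pins Docker to the cgroupfs driver; its exact payload is not shown in this log. A hedged way to verify the effect on the node:)
    # illustrative only: confirm which cgroup driver Docker ended up with
    cat /etc/docker/daemon.json              # typically carries "exec-opts": ["native.cgroupdriver=cgroupfs"]
    docker info --format '{{.CgroupDriver}}'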
I0203 14:18:26.958968 5571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0203 14:18:27.049286 5571 ssh_runner.go:195] Run: sudo systemctl restart docker
I0203 14:18:27.248310 5571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0203 14:18:27.277652 5571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0203 14:18:27.349692 5571 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
I0203 14:18:27.349858 5571 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-802000 dig +short host.docker.internal
I0203 14:18:27.512009 5571 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0203 14:18:27.512135 5571 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0203 14:18:27.517099 5571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
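(The one-liner above is minikube's upsert idiom for /etc/hosts: drop any existing host.minikube.internal entry, append the fresh one, then sudo-copy the temp file back, since a plain `sudo echo ... >> /etc/hosts` would redirect as the unprivileged SSH user. The same pattern, generalized and purely illustrative:)
    # illustrative only: upsert "IP NAME" into /etc/hosts without root-owned redirection problems
    { grep -v $'\tNAME$' /etc/hosts; echo "IP NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts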
I0203 14:18:27.527048 5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
I0203 14:18:27.583724 5571 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0203 14:18:27.583810 5571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0203 14:18:27.607235 5571 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0203 14:18:27.607253 5571 docker.go:560] Images already preloaded, skipping extraction
I0203 14:18:27.607351 5571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0203 14:18:27.630620 5571 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0203 14:18:27.630640 5571 cache_images.go:84] Images are preloaded, skipping loading
I0203 14:18:27.630725 5571 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0203 14:18:27.701917 5571 cni.go:84] Creating CNI manager for ""
I0203 14:18:27.701935 5571 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0203 14:18:27.701972 5571 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0203 14:18:27.701988 5571 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-802000 NodeName:ingress-addon-legacy-802000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0203 14:18:27.702110 5571 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-802000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
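(The block above is the full kubeadm config minikube renders and later copies to /var/tmp/minikube/kubeadm.yaml. If one wanted to exercise it without mutating the node, kubeadm's dry-run mode is the usual route; illustrative only, using the same pinned binary:)
    # illustrative only: parse and render the generated config without touching cluster state
    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run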
I0203 14:18:27.702206 5571 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-802000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0203 14:18:27.702271 5571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0203 14:18:27.710067 5571 binaries.go:44] Found k8s binaries, skipping transfer
I0203 14:18:27.710149 5571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0203 14:18:27.717521 5571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0203 14:18:27.730507 5571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0203 14:18:27.743163 5571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
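(At this point the kubelet unit, its 10-kubeadm.conf drop-in, and the kubeadm config have all been copied onto the node. A hedged way to confirm systemd picked up the drop-in, using the same `systemctl cat` idiom the log applies to docker.service earlier:)
    # illustrative only: show the kubelet unit plus the drop-in minikube just installed
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf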
I0203 14:18:27.755783 5571 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0203 14:18:27.759538 5571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0203 14:18:27.769176 5571 certs.go:56] Setting up /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000 for IP: 192.168.49.2
I0203 14:18:27.769197 5571 certs.go:186] acquiring lock for shared ca certs: {Name:mkdec04c6cc16ac0dcab0ae849b602e6c1942576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 14:18:27.769380 5571 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key
I0203 14:18:27.769455 5571 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key
I0203 14:18:27.769501 5571 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.key
I0203 14:18:27.769516 5571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.crt with IP's: []
I0203 14:18:27.835880 5571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.crt ...
I0203 14:18:27.835889 5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.crt: {Name:mk4c7c93e45c89a6fe511fadc98f9279b780aec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 14:18:27.836169 5571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.key ...
I0203 14:18:27.836183 5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.key: {Name:mkde103a9cc747f76d9558504ded1cb1c7da1102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 14:18:27.836383 5571 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key.dd3b5fb2
I0203 14:18:27.836398 5571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0203 14:18:27.922651 5571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt.dd3b5fb2 ...
I0203 14:18:27.922660 5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt.dd3b5fb2: {Name:mkd88ebbf7f7bbc9fd988d78230372388c0af50c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 14:18:27.922872 5571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key.dd3b5fb2 ...
I0203 14:18:27.922880 5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key.dd3b5fb2: {Name:mk7c847e8222904c3cb69c64c6bf7cf0ef2c3015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 14:18:27.923095 5571 certs.go:333] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt
I0203 14:18:27.923283 5571 certs.go:337] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key
I0203 14:18:27.923451 5571 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key
I0203 14:18:27.923466 5571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt with IP's: []
I0203 14:18:28.059984 5571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt ...
I0203 14:18:28.059993 5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt: {Name:mk30ef9499cded1193335db2e2ed3e4a9595a1e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 14:18:28.060227 5571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key ...
I0203 14:18:28.060235 5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key: {Name:mke14c44f1625324c23ce50fbd2bf2ea5215aacf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0203 14:18:28.060417 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0203 14:18:28.060446 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0203 14:18:28.060466 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0203 14:18:28.060485 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0203 14:18:28.060505 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0203 14:18:28.060523 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0203 14:18:28.060540 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0203 14:18:28.060557 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0203 14:18:28.060663 5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem (1338 bytes)
W0203 14:18:28.060718 5571 certs.go:397] ignoring /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568_empty.pem, impossibly tiny 0 bytes
I0203 14:18:28.060729 5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem (1675 bytes)
I0203 14:18:28.060761 5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem (1078 bytes)
I0203 14:18:28.060795 5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem (1123 bytes)
I0203 14:18:28.060824 5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem (1675 bytes)
I0203 14:18:28.060892 5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem (1708 bytes)
I0203 14:18:28.060929 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0203 14:18:28.060949 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem -> /usr/share/ca-certificates/2568.pem
I0203 14:18:28.060966 5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> /usr/share/ca-certificates/25682.pem
I0203 14:18:28.061491 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0203 14:18:28.080164 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0203 14:18:28.097134 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0203 14:18:28.114042 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0203 14:18:28.130819 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0203 14:18:28.147873 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0203 14:18:28.164854 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0203 14:18:28.181969 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0203 14:18:28.198845 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0203 14:18:28.216092 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem --> /usr/share/ca-certificates/2568.pem (1338 bytes)
I0203 14:18:28.233172 5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /usr/share/ca-certificates/25682.pem (1708 bytes)
I0203 14:18:28.250146 5571 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0203 14:18:28.262895 5571 ssh_runner.go:195] Run: openssl version
I0203 14:18:28.268348 5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0203 14:18:28.276498 5571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0203 14:18:28.280495 5571 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 3 22:08 /usr/share/ca-certificates/minikubeCA.pem
I0203 14:18:28.280543 5571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0203 14:18:28.286044 5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0203 14:18:28.294223 5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2568.pem && ln -fs /usr/share/ca-certificates/2568.pem /etc/ssl/certs/2568.pem"
I0203 14:18:28.302244 5571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2568.pem
I0203 14:18:28.306285 5571 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 3 22:13 /usr/share/ca-certificates/2568.pem
I0203 14:18:28.306330 5571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2568.pem
I0203 14:18:28.311792 5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2568.pem /etc/ssl/certs/51391683.0"
I0203 14:18:28.319804 5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25682.pem && ln -fs /usr/share/ca-certificates/25682.pem /etc/ssl/certs/25682.pem"
I0203 14:18:28.327917 5571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25682.pem
I0203 14:18:28.331821 5571 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 3 22:13 /usr/share/ca-certificates/25682.pem
I0203 14:18:28.331864 5571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25682.pem
I0203 14:18:28.337331 5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25682.pem /etc/ssl/certs/3ec20f2e.0"
I0203 14:18:28.345469 5571 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0203 14:18:28.345580 5571 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0203 14:18:28.368922 5571 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0203 14:18:28.376741 5571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0203 14:18:28.384282 5571 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0203 14:18:28.384334 5571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0203 14:18:28.391687 5571 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0203 14:18:28.391711 5571 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0203 14:18:28.439088 5571 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0203 14:18:28.439139 5571 kubeadm.go:322] [preflight] Running pre-flight checks
I0203 14:18:28.737153 5571 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0203 14:18:28.737236 5571 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0203 14:18:28.737320 5571 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0203 14:18:28.961399 5571 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0203 14:18:28.962087 5571 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0203 14:18:28.962125 5571 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0203 14:18:29.033201 5571 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0203 14:18:29.054996 5571 out.go:204] - Generating certificates and keys ...
I0203 14:18:29.055096 5571 kubeadm.go:322] [certs] Using existing ca certificate authority
I0203 14:18:29.055187 5571 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0203 14:18:29.138062 5571 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0203 14:18:29.222299 5571 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0203 14:18:29.382974 5571 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0203 14:18:29.521554 5571 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0203 14:18:29.663269 5571 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0203 14:18:29.663423 5571 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0203 14:18:29.925220 5571 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0203 14:18:29.925401 5571 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0203 14:18:30.152716 5571 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0203 14:18:30.284926 5571 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0203 14:18:30.383106 5571 kubeadm.go:322] [certs] Generating "sa" key and public key
I0203 14:18:30.383175 5571 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0203 14:18:30.474922 5571 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0203 14:18:30.533920 5571 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0203 14:18:30.661920 5571 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0203 14:18:31.023709 5571 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0203 14:18:31.024235 5571 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0203 14:18:31.045755 5571 out.go:204] - Booting up control plane ...
I0203 14:18:31.045855 5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0203 14:18:31.045936 5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0203 14:18:31.046004 5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0203 14:18:31.046076 5571 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0203 14:18:31.046206 5571 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0203 14:19:11.035228 5571 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0203 14:19:11.036612 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:19:11.036825 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:19:16.037283 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:19:16.037444 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:19:26.039435 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:19:26.039667 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:19:46.040217 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:19:46.040373 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:20:26.043133 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:20:26.043353 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:20:26.043365 5571 kubeadm.go:322]
I0203 14:20:26.043417 5571 kubeadm.go:322] Unfortunately, an error has occurred:
I0203 14:20:26.043486 5571 kubeadm.go:322] timed out waiting for the condition
I0203 14:20:26.043508 5571 kubeadm.go:322]
I0203 14:20:26.043545 5571 kubeadm.go:322] This error is likely caused by:
I0203 14:20:26.043590 5571 kubeadm.go:322] - The kubelet is not running
I0203 14:20:26.043694 5571 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0203 14:20:26.043704 5571 kubeadm.go:322]
I0203 14:20:26.043858 5571 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0203 14:20:26.043896 5571 kubeadm.go:322] - 'systemctl status kubelet'
I0203 14:20:26.043928 5571 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0203 14:20:26.043933 5571 kubeadm.go:322]
I0203 14:20:26.044060 5571 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0203 14:20:26.044156 5571 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0203 14:20:26.044171 5571 kubeadm.go:322]
I0203 14:20:26.044276 5571 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0203 14:20:26.044351 5571 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0203 14:20:26.044441 5571 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0203 14:20:26.044497 5571 kubeadm.go:322] - 'docker logs CONTAINERID'
I0203 14:20:26.044506 5571 kubeadm.go:322]
I0203 14:20:26.047258 5571 kubeadm.go:322] W0203 22:18:28.438298 1164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0203 14:20:26.047406 5571 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0203 14:20:26.047457 5571 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0203 14:20:26.047563 5571 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
I0203 14:20:26.047651 5571 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0203 14:20:26.047777 5571 kubeadm.go:322] W0203 22:18:31.028757 1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0203 14:20:26.047879 5571 kubeadm.go:322] W0203 22:18:31.029929 1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0203 14:20:26.047946 5571 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0203 14:20:26.048018 5571 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0203 14:20:26.048224 5571 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0203 22:18:28.438298 1164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0203 22:18:31.028757 1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0203 22:18:31.029929 1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0203 22:18:28.438298 1164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0203 22:18:31.028757 1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0203 22:18:31.029929 1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
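(Both kubeadm attempts fail the same way: the kubelet never answers on 127.0.0.1:10248, so the static control-plane pods are never started. With the docker driver the "node" is the ingress-addon-legacy-802000 container, so the kubelet can be interrogated from the host roughly as sketched below; illustrative commands, assuming the kicbase image's systemd is running:)
    # illustrative only: look at the kubelet inside the minikube node container
    docker exec ingress-addon-legacy-802000 systemctl status kubelet
    docker exec ingress-addon-legacy-802000 journalctl -u kubelet -n 100 --no-pager
    # or via minikube itself
    minikube -p ingress-addon-legacy-802000 ssh -- sudo journalctl -u kubelet -n 100 --no-pager
    minikube -p ingress-addon-legacy-802000 logs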
I0203 14:20:26.048264 5571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0203 14:20:26.463049 5571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0203 14:20:26.472673 5571 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0203 14:20:26.472729 5571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0203 14:20:26.480118 5571 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0203 14:20:26.480139 5571 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0203 14:20:26.527819 5571 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0203 14:20:26.527878 5571 kubeadm.go:322] [preflight] Running pre-flight checks
I0203 14:20:26.817279 5571 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0203 14:20:26.817400 5571 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0203 14:20:26.817528 5571 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0203 14:20:27.036412 5571 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0203 14:20:27.036838 5571 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0203 14:20:27.036873 5571 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0203 14:20:27.106940 5571 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0203 14:20:27.130532 5571 out.go:204] - Generating certificates and keys ...
I0203 14:20:27.130661 5571 kubeadm.go:322] [certs] Using existing ca certificate authority
I0203 14:20:27.130726 5571 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0203 14:20:27.130814 5571 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0203 14:20:27.130889 5571 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0203 14:20:27.130949 5571 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0203 14:20:27.131013 5571 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0203 14:20:27.131072 5571 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0203 14:20:27.131124 5571 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0203 14:20:27.131199 5571 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0203 14:20:27.131289 5571 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0203 14:20:27.131342 5571 kubeadm.go:322] [certs] Using the existing "sa" key
I0203 14:20:27.131415 5571 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0203 14:20:27.181116 5571 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0203 14:20:27.460691 5571 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0203 14:20:27.772680 5571 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0203 14:20:27.854719 5571 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0203 14:20:27.855252 5571 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0203 14:20:27.876826 5571 out.go:204] - Booting up control plane ...
I0203 14:20:27.877010 5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0203 14:20:27.877154 5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0203 14:20:27.877286 5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0203 14:20:27.877411 5571 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0203 14:20:27.877711 5571 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0203 14:21:07.871449 5571 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0203 14:21:07.872382 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:21:07.872604 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:21:12.873190 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:21:12.873367 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:21:22.875673 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:21:22.875877 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:21:42.876977 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:21:42.877133 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:22:22.880274 5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0203 14:22:22.880492 5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0203 14:22:22.880507 5571 kubeadm.go:322]
I0203 14:22:22.880548 5571 kubeadm.go:322] Unfortunately, an error has occurred:
I0203 14:22:22.880610 5571 kubeadm.go:322] timed out waiting for the condition
I0203 14:22:22.880636 5571 kubeadm.go:322]
I0203 14:22:22.880718 5571 kubeadm.go:322] This error is likely caused by:
I0203 14:22:22.880759 5571 kubeadm.go:322] - The kubelet is not running
I0203 14:22:22.880877 5571 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0203 14:22:22.880894 5571 kubeadm.go:322]
I0203 14:22:22.881010 5571 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0203 14:22:22.881071 5571 kubeadm.go:322] - 'systemctl status kubelet'
I0203 14:22:22.881126 5571 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0203 14:22:22.881130 5571 kubeadm.go:322]
I0203 14:22:22.881211 5571 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0203 14:22:22.881276 5571 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0203 14:22:22.881282 5571 kubeadm.go:322]
I0203 14:22:22.881363 5571 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0203 14:22:22.881413 5571 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0203 14:22:22.881486 5571 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0203 14:22:22.881514 5571 kubeadm.go:322] - 'docker logs CONTAINERID'
I0203 14:22:22.881520 5571 kubeadm.go:322]
I0203 14:22:22.884563 5571 kubeadm.go:322] W0203 22:20:26.526982 3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0203 14:22:22.884765 5571 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0203 14:22:22.884833 5571 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0203 14:22:22.884946 5571 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
I0203 14:22:22.885030 5571 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0203 14:22:22.885120 5571 kubeadm.go:322] W0203 22:20:27.859197 3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0203 14:22:22.885228 5571 kubeadm.go:322] W0203 22:20:27.859976 3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0203 14:22:22.885309 5571 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0203 14:22:22.885366 5571 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0203 14:22:22.885411 5571 kubeadm.go:403] StartCluster complete in 3m54.525667948s
I0203 14:22:22.885500 5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0203 14:22:22.907479 5571 logs.go:279] 0 containers: []
W0203 14:22:22.907492 5571 logs.go:281] No container was found matching "kube-apiserver"
I0203 14:22:22.907558 5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0203 14:22:22.929736 5571 logs.go:279] 0 containers: []
W0203 14:22:22.929749 5571 logs.go:281] No container was found matching "etcd"
I0203 14:22:22.929817 5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0203 14:22:22.951906 5571 logs.go:279] 0 containers: []
W0203 14:22:22.951918 5571 logs.go:281] No container was found matching "coredns"
I0203 14:22:22.951986 5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0203 14:22:22.974675 5571 logs.go:279] 0 containers: []
W0203 14:22:22.974689 5571 logs.go:281] No container was found matching "kube-scheduler"
I0203 14:22:22.974756 5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0203 14:22:22.999245 5571 logs.go:279] 0 containers: []
W0203 14:22:22.999258 5571 logs.go:281] No container was found matching "kube-proxy"
I0203 14:22:22.999338 5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0203 14:22:23.022331 5571 logs.go:279] 0 containers: []
W0203 14:22:23.022345 5571 logs.go:281] No container was found matching "kubernetes-dashboard"
I0203 14:22:23.022412 5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0203 14:22:23.046201 5571 logs.go:279] 0 containers: []
W0203 14:22:23.046216 5571 logs.go:281] No container was found matching "storage-provisioner"
I0203 14:22:23.046300 5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0203 14:22:23.069025 5571 logs.go:279] 0 containers: []
W0203 14:22:23.069039 5571 logs.go:281] No container was found matching "kube-controller-manager"
I0203 14:22:23.069046 5571 logs.go:124] Gathering logs for dmesg ...
I0203 14:22:23.069057 5571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0203 14:22:23.081102 5571 logs.go:124] Gathering logs for describe nodes ...
I0203 14:22:23.081115 5571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0203 14:22:23.134484 5571 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0203 14:22:23.134496 5571 logs.go:124] Gathering logs for Docker ...
I0203 14:22:23.134507 5571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0203 14:22:23.151630 5571 logs.go:124] Gathering logs for container status ...
I0203 14:22:23.151646 5571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0203 14:22:25.203035 5571 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051321976s)
I0203 14:22:25.203149 5571 logs.go:124] Gathering logs for kubelet ...
I0203 14:22:25.203157 5571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0203 14:22:25.241728 5571 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0203 22:20:26.526982 3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0203 22:20:27.859197 3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0203 22:20:27.859976 3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0203 14:22:25.241749 5571 out.go:239] *
W0203 14:22:25.241866 5571 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0203 22:20:26.526982 3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0203 22:20:27.859197 3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0203 22:20:27.859976 3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0203 14:22:25.241879 5571 out.go:239] *
W0203 14:22:25.242523 5571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0203 14:22:25.305410 5571 out.go:177]
W0203 14:22:25.348597 5571 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0203 22:20:26.526982 3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0203 22:20:27.859197 3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0203 22:20:27.859976 3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0203 14:22:25.348776 5571 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0203 14:22:25.348871 5571 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0203 14:22:25.391192 5571 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-802000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (256.10s)
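The troubleshooting commands and the retry flag below are taken directly from the advice printed in the log above; this is a hedged follow-up sketch, not a verified fix, and it assumes the same profile name and minikube binary as the failing run (the node container must still exist for the ssh commands to work).

    # Inspect the kubelet inside the node, as the kubeadm output suggests
    out/minikube-darwin-amd64 -p ingress-addon-legacy-802000 ssh "sudo systemctl status kubelet"
    out/minikube-darwin-amd64 -p ingress-addon-legacy-802000 ssh "sudo journalctl -xeu kubelet | tail -n 100"

    # Retry with the kubelet cgroup driver forced to systemd, per the suggestion printed above
    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-802000
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-802000 --kubernetes-version=v1.18.20 \
      --memory=4096 --wait=true --driver=docker --extra-config=kubelet.cgroup-driver=systemd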