=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-136000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0108 21:32:39.263005 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/addons-039000/client.crt: no such file or directory
E0108 21:33:06.958256 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/addons-039000/client.crt: no such file or directory
E0108 21:33:28.111151 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:28.117563 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:28.129889 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:28.152100 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:28.194078 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:28.275003 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:28.437230 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:28.758231 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:29.400552 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:30.680987 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:33.243165 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:38.364521 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:33:48.604844 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:34:09.087169 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
E0108 21:34:50.047647 13408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/functional-642000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-136000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m21.970344746s)
-- stdout --
* [ingress-addon-legacy-136000] minikube v1.32.0 on Darwin 14.2.1
- MINIKUBE_LOCATION=17830
- KUBECONFIG=/Users/jenkins/minikube-integration/17830-12965/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17830-12965/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-136000 in cluster ingress-addon-legacy-136000
* Pulling base image v0.0.42-1704751654-17830 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0108 21:30:48.429398 16187 out.go:296] Setting OutFile to fd 1 ...
I0108 21:30:48.429614 16187 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:30:48.429619 16187 out.go:309] Setting ErrFile to fd 2...
I0108 21:30:48.429623 16187 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 21:30:48.429808 16187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17830-12965/.minikube/bin
I0108 21:30:48.431364 16187 out.go:303] Setting JSON to false
I0108 21:30:48.453989 16187 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":45019,"bootTime":1704733229,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0108 21:30:48.454105 16187 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0108 21:30:48.475513 16187 out.go:177] * [ingress-addon-legacy-136000] minikube v1.32.0 on Darwin 14.2.1
I0108 21:30:48.497301 16187 out.go:177] - MINIKUBE_LOCATION=17830
I0108 21:30:48.497484 16187 notify.go:220] Checking for updates...
I0108 21:30:48.518240 16187 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/17830-12965/kubeconfig
I0108 21:30:48.540377 16187 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0108 21:30:48.562108 16187 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 21:30:48.583264 16187 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17830-12965/.minikube
I0108 21:30:48.604256 16187 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0108 21:30:48.625545 16187 driver.go:392] Setting default libvirt URI to qemu:///system
I0108 21:30:48.683126 16187 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
I0108 21:30:48.683289 16187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 21:30:48.784986 16187 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-09 05:30:48.775245428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
I0108 21:30:48.828397 16187 out.go:177] * Using the docker driver based on user configuration
I0108 21:30:48.849173 16187 start.go:298] selected driver: docker
I0108 21:30:48.849203 16187 start.go:902] validating driver "docker" against <nil>
I0108 21:30:48.849228 16187 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 21:30:48.853723 16187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 21:30:48.956290 16187 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-09 05:30:48.946126182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
I0108 21:30:48.956479 16187 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0108 21:30:48.956656 16187 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0108 21:30:48.978018 16187 out.go:177] * Using Docker Desktop driver with root privileges
I0108 21:30:48.999957 16187 cni.go:84] Creating CNI manager for ""
I0108 21:30:48.999995 16187 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0108 21:30:49.000012 16187 start_flags.go:323] config:
{Name:ingress-addon-legacy-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0108 21:30:49.042731 16187 out.go:177] * Starting control plane node ingress-addon-legacy-136000 in cluster ingress-addon-legacy-136000
I0108 21:30:49.063977 16187 cache.go:121] Beginning downloading kic base image for docker with docker
I0108 21:30:49.106907 16187 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
I0108 21:30:49.127865 16187 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 21:30:49.127965 16187 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
I0108 21:30:49.183018 16187 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0108 21:30:49.183042 16187 cache.go:56] Caching tarball of preloaded images
I0108 21:30:49.183124 16187 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
I0108 21:30:49.183140 16187 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
I0108 21:30:49.183252 16187 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 21:30:49.224953 16187 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0108 21:30:49.245702 16187 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 21:30:49.324616 16187 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0108 21:30:57.334962 16187 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 21:30:57.335177 16187 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 21:30:57.967648 16187 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0108 21:30:57.967893 16187 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/config.json ...
I0108 21:30:57.967918 16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/config.json: {Name:mk89c9959dbe2434e5737d071aa460344321bf62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:30:57.968230 16187 cache.go:194] Successfully downloaded all kic artifacts
I0108 21:30:57.968260 16187 start.go:365] acquiring machines lock for ingress-addon-legacy-136000: {Name:mkcc5d11730e8b339757cf0a28e93a7305e04509 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 21:30:57.968353 16187 start.go:369] acquired machines lock for "ingress-addon-legacy-136000" in 86.191µs
I0108 21:30:57.968374 16187 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 21:30:57.968421 16187 start.go:125] createHost starting for "" (driver="docker")
I0108 21:30:58.021397 16187 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0108 21:30:58.021726 16187 start.go:159] libmachine.API.Create for "ingress-addon-legacy-136000" (driver="docker")
I0108 21:30:58.021776 16187 client.go:168] LocalClient.Create starting
I0108 21:30:58.021949 16187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca.pem
I0108 21:30:58.022051 16187 main.go:141] libmachine: Decoding PEM data...
I0108 21:30:58.022083 16187 main.go:141] libmachine: Parsing certificate...
I0108 21:30:58.022168 16187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/cert.pem
I0108 21:30:58.022240 16187 main.go:141] libmachine: Decoding PEM data...
I0108 21:30:58.022261 16187 main.go:141] libmachine: Parsing certificate...
I0108 21:30:58.023116 16187 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-136000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0108 21:30:58.075600 16187 cli_runner.go:211] docker network inspect ingress-addon-legacy-136000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0108 21:30:58.075724 16187 network_create.go:281] running [docker network inspect ingress-addon-legacy-136000] to gather additional debugging logs...
I0108 21:30:58.075749 16187 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-136000
W0108 21:30:58.126696 16187 cli_runner.go:211] docker network inspect ingress-addon-legacy-136000 returned with exit code 1
I0108 21:30:58.126737 16187 network_create.go:284] error running [docker network inspect ingress-addon-legacy-136000]: docker network inspect ingress-addon-legacy-136000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-136000 not found
I0108 21:30:58.126756 16187 network_create.go:286] output of [docker network inspect ingress-addon-legacy-136000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-136000 not found
** /stderr **
I0108 21:30:58.126928 16187 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0108 21:30:58.179676 16187 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00049d780}
I0108 21:30:58.179718 16187 network_create.go:124] attempt to create docker network ingress-addon-legacy-136000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0108 21:30:58.179794 16187 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-136000 ingress-addon-legacy-136000
I0108 21:30:58.268616 16187 network_create.go:108] docker network ingress-addon-legacy-136000 192.168.49.0/24 created
I0108 21:30:58.268686 16187 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-136000" container
I0108 21:30:58.268835 16187 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0108 21:30:58.320307 16187 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-136000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-136000 --label created_by.minikube.sigs.k8s.io=true
I0108 21:30:58.372088 16187 oci.go:103] Successfully created a docker volume ingress-addon-legacy-136000
I0108 21:30:58.372216 16187 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-136000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-136000 --entrypoint /usr/bin/test -v ingress-addon-legacy-136000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
I0108 21:30:58.749496 16187 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-136000
I0108 21:30:58.749548 16187 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 21:30:58.749560 16187 kic.go:194] Starting extracting preloaded images to volume ...
I0108 21:30:58.749683 16187 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-136000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
I0108 21:31:00.915375 16187 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-136000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (2.165607039s)
I0108 21:31:00.915404 16187 kic.go:203] duration metric: took 2.165836 seconds to extract preloaded images to volume
I0108 21:31:00.915508 16187 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0108 21:31:01.018485 16187 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-136000 --name ingress-addon-legacy-136000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-136000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-136000 --network ingress-addon-legacy-136000 --ip 192.168.49.2 --volume ingress-addon-legacy-136000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
I0108 21:31:01.295393 16187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-136000 --format={{.State.Running}}
I0108 21:31:01.350174 16187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-136000 --format={{.State.Status}}
I0108 21:31:01.405457 16187 cli_runner.go:164] Run: docker exec ingress-addon-legacy-136000 stat /var/lib/dpkg/alternatives/iptables
I0108 21:31:01.560421 16187 oci.go:144] the created container "ingress-addon-legacy-136000" has a running status.
I0108 21:31:01.560454 16187 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17830-12965/.minikube/machines/ingress-addon-legacy-136000/id_rsa...
I0108 21:31:01.726352 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/machines/ingress-addon-legacy-136000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0108 21:31:01.726416 16187 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17830-12965/.minikube/machines/ingress-addon-legacy-136000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0108 21:31:01.788465 16187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-136000 --format={{.State.Status}}
I0108 21:31:01.843888 16187 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0108 21:31:01.843913 16187 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-136000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0108 21:31:01.945812 16187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-136000 --format={{.State.Status}}
I0108 21:31:01.997248 16187 machine.go:88] provisioning docker machine ...
I0108 21:31:01.997291 16187 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-136000"
I0108 21:31:01.997398 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:02.048570 16187 main.go:141] libmachine: Using SSH client type: native
I0108 21:31:02.048910 16187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 59060 <nil> <nil>}
I0108 21:31:02.048923 16187 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-136000 && echo "ingress-addon-legacy-136000" | sudo tee /etc/hostname
I0108 21:31:02.194797 16187 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-136000
I0108 21:31:02.194944 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:02.246511 16187 main.go:141] libmachine: Using SSH client type: native
I0108 21:31:02.246801 16187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 59060 <nil> <nil>}
I0108 21:31:02.246821 16187 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-136000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-136000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-136000' | sudo tee -a /etc/hosts;
fi
fi
I0108 21:31:02.383928 16187 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0108 21:31:02.383958 16187 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17830-12965/.minikube CaCertPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17830-12965/.minikube}
I0108 21:31:02.383979 16187 ubuntu.go:177] setting up certificates
I0108 21:31:02.383995 16187 provision.go:83] configureAuth start
I0108 21:31:02.384082 16187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-136000
I0108 21:31:02.435214 16187 provision.go:138] copyHostCerts
I0108 21:31:02.435258 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.pem
I0108 21:31:02.435308 16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.pem, removing ...
I0108 21:31:02.435314 16187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.pem
I0108 21:31:02.435446 16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.pem (1078 bytes)
I0108 21:31:02.435653 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17830-12965/.minikube/cert.pem
I0108 21:31:02.435681 16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/17830-12965/.minikube/cert.pem, removing ...
I0108 21:31:02.435686 16187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17830-12965/.minikube/cert.pem
I0108 21:31:02.435770 16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17830-12965/.minikube/cert.pem (1123 bytes)
I0108 21:31:02.435926 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17830-12965/.minikube/key.pem
I0108 21:31:02.435962 16187 exec_runner.go:144] found /Users/jenkins/minikube-integration/17830-12965/.minikube/key.pem, removing ...
I0108 21:31:02.435967 16187 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17830-12965/.minikube/key.pem
I0108 21:31:02.436050 16187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17830-12965/.minikube/key.pem (1679 bytes)
I0108 21:31:02.436207 16187 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-136000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-136000]
I0108 21:31:02.537006 16187 provision.go:172] copyRemoteCerts
I0108 21:31:02.537061 16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 21:31:02.537115 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:02.588756 16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59060 SSHKeyPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/machines/ingress-addon-legacy-136000/id_rsa Username:docker}
I0108 21:31:02.686053 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0108 21:31:02.686133 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0108 21:31:02.706180 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/machines/server.pem -> /etc/docker/server.pem
I0108 21:31:02.706261 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0108 21:31:02.726775 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0108 21:31:02.726857 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0108 21:31:02.746958 16187 provision.go:86] duration metric: configureAuth took 362.948377ms
I0108 21:31:02.746975 16187 ubuntu.go:193] setting minikube options for container-runtime
I0108 21:31:02.747135 16187 config.go:182] Loaded profile config "ingress-addon-legacy-136000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0108 21:31:02.747198 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:02.798741 16187 main.go:141] libmachine: Using SSH client type: native
I0108 21:31:02.799069 16187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 59060 <nil> <nil>}
I0108 21:31:02.799090 16187 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0108 21:31:02.932078 16187 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0108 21:31:02.932095 16187 ubuntu.go:71] root file system type: overlay
I0108 21:31:02.932202 16187 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0108 21:31:02.932282 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:02.983747 16187 main.go:141] libmachine: Using SSH client type: native
I0108 21:31:02.984033 16187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 59060 <nil> <nil>}
I0108 21:31:02.984086 16187 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0108 21:31:03.129670 16187 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0108 21:31:03.129765 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:03.181620 16187 main.go:141] libmachine: Using SSH client type: native
I0108 21:31:03.181918 16187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 59060 <nil> <nil>}
I0108 21:31:03.181930 16187 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0108 21:31:03.761353 16187 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-10-26 09:06:22.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-01-09 05:31:03.126760814 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0108 21:31:03.761378 16187 machine.go:91] provisioned docker machine in 1.764101782s
I0108 21:31:03.761386 16187 client.go:171] LocalClient.Create took 5.739587257s
I0108 21:31:03.761402 16187 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-136000" took 5.739661424s
I0108 21:31:03.761412 16187 start.go:300] post-start starting for "ingress-addon-legacy-136000" (driver="docker")
I0108 21:31:03.761420 16187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 21:31:03.761495 16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 21:31:03.761567 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:03.815598 16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59060 SSHKeyPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/machines/ingress-addon-legacy-136000/id_rsa Username:docker}
I0108 21:31:03.912414 16187 ssh_runner.go:195] Run: cat /etc/os-release
I0108 21:31:03.916122 16187 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0108 21:31:03.916148 16187 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0108 21:31:03.916156 16187 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0108 21:31:03.916161 16187 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0108 21:31:03.916171 16187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17830-12965/.minikube/addons for local assets ...
I0108 21:31:03.916275 16187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17830-12965/.minikube/files for local assets ...
I0108 21:31:03.916462 16187 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17830-12965/.minikube/files/etc/ssl/certs/134082.pem -> 134082.pem in /etc/ssl/certs
I0108 21:31:03.916469 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/files/etc/ssl/certs/134082.pem -> /etc/ssl/certs/134082.pem
I0108 21:31:03.916681 16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 21:31:03.924695 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/files/etc/ssl/certs/134082.pem --> /etc/ssl/certs/134082.pem (1708 bytes)
I0108 21:31:03.945203 16187 start.go:303] post-start completed in 183.780401ms
I0108 21:31:03.945784 16187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-136000
I0108 21:31:03.996877 16187 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/config.json ...
I0108 21:31:03.997344 16187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0108 21:31:03.997402 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:04.049259 16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59060 SSHKeyPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/machines/ingress-addon-legacy-136000/id_rsa Username:docker}
I0108 21:31:04.141870 16187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0108 21:31:04.146744 16187 start.go:128] duration metric: createHost completed in 6.178294078s
I0108 21:31:04.146757 16187 start.go:83] releasing machines lock for "ingress-addon-legacy-136000", held for 6.178378482s
I0108 21:31:04.146849 16187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-136000
I0108 21:31:04.200571 16187 ssh_runner.go:195] Run: cat /version.json
I0108 21:31:04.200608 16187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0108 21:31:04.200642 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:04.200701 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:04.254680 16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59060 SSHKeyPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/machines/ingress-addon-legacy-136000/id_rsa Username:docker}
I0108 21:31:04.254683 16187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59060 SSHKeyPath:/Users/jenkins/minikube-integration/17830-12965/.minikube/machines/ingress-addon-legacy-136000/id_rsa Username:docker}
I0108 21:31:04.454027 16187 ssh_runner.go:195] Run: systemctl --version
I0108 21:31:04.458925 16187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0108 21:31:04.463734 16187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0108 21:31:04.485591 16187 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0108 21:31:04.485658 16187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0108 21:31:04.500753 16187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0108 21:31:04.515584 16187 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0108 21:31:04.515598 16187 start.go:475] detecting cgroup driver to use...
I0108 21:31:04.515610 16187 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0108 21:31:04.515724 16187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 21:31:04.530364 16187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0108 21:31:04.539672 16187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0108 21:31:04.548835 16187 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0108 21:31:04.548899 16187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0108 21:31:04.558194 16187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0108 21:31:04.567321 16187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0108 21:31:04.576475 16187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0108 21:31:04.585762 16187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0108 21:31:04.594669 16187 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0108 21:31:04.604155 16187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0108 21:31:04.612105 16187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0108 21:31:04.620121 16187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:31:04.671672 16187 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0108 21:31:04.754900 16187 start.go:475] detecting cgroup driver to use...
I0108 21:31:04.754935 16187 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0108 21:31:04.755003 16187 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0108 21:31:04.770635 16187 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0108 21:31:04.770707 16187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 21:31:04.781984 16187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0108 21:31:04.799277 16187 ssh_runner.go:195] Run: which cri-dockerd
I0108 21:31:04.803769 16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0108 21:31:04.813013 16187 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0108 21:31:04.857486 16187 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0108 21:31:04.915511 16187 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0108 21:31:04.995271 16187 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0108 21:31:04.995368 16187 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0108 21:31:05.011255 16187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:31:05.083770 16187 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 21:31:05.332381 16187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 21:31:05.359462 16187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 21:31:05.417495 16187 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
I0108 21:31:05.417620 16187 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-136000 dig +short host.docker.internal
I0108 21:31:05.553245 16187 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0108 21:31:05.553378 16187 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0108 21:31:05.559351 16187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 21:31:05.571570 16187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-136000
I0108 21:31:05.623704 16187 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 21:31:05.623785 16187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 21:31:05.643662 16187 docker.go:671] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0108 21:31:05.643676 16187 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0108 21:31:05.643730 16187 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0108 21:31:05.652386 16187 ssh_runner.go:195] Run: which lz4
I0108 21:31:05.656515 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0108 21:31:05.656628 16187 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0108 21:31:05.660805 16187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0108 21:31:05.660847 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0108 21:31:11.360967 16187 docker.go:635] Took 5.704362 seconds to copy over tarball
I0108 21:31:11.361057 16187 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0108 21:31:12.996850 16187 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.635764563s)
I0108 21:31:12.996881 16187 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0108 21:31:13.042999 16187 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0108 21:31:13.051635 16187 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0108 21:31:13.066799 16187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:31:13.119029 16187 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 21:31:14.252917 16187 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.133853097s)
I0108 21:31:14.253007 16187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 21:31:14.271451 16187 docker.go:671] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0108 21:31:14.271466 16187 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0108 21:31:14.271476 16187 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0108 21:31:14.276217 16187 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 21:31:14.276308 16187 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0108 21:31:14.277280 16187 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0108 21:31:14.277727 16187 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0108 21:31:14.277848 16187 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0108 21:31:14.278376 16187 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0108 21:31:14.278443 16187 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0108 21:31:14.278719 16187 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0108 21:31:14.281754 16187 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 21:31:14.282057 16187 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0108 21:31:14.285312 16187 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0108 21:31:14.285367 16187 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0108 21:31:14.285387 16187 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0108 21:31:14.285383 16187 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0108 21:31:14.285401 16187 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0108 21:31:14.285439 16187 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0108 21:31:14.786530 16187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0108 21:31:14.790914 16187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0108 21:31:14.805429 16187 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0108 21:31:14.805471 16187 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0108 21:31:14.805539 16187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0108 21:31:14.810463 16187 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0108 21:31:14.810490 16187 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0108 21:31:14.810553 16187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0108 21:31:14.826319 16187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0108 21:31:14.827522 16187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0108 21:31:14.831009 16187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0108 21:31:14.847282 16187 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0108 21:31:14.847315 16187 docker.go:323] Removing image: registry.k8s.io/pause:3.2
I0108 21:31:14.847386 16187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0108 21:31:14.847824 16187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0108 21:31:14.864406 16187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0108 21:31:14.868117 16187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0108 21:31:14.868161 16187 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0108 21:31:14.868181 16187 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0108 21:31:14.868234 16187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0108 21:31:14.886386 16187 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0108 21:31:14.886419 16187 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
I0108 21:31:14.886485 16187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0108 21:31:14.888784 16187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0108 21:31:14.894992 16187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0108 21:31:14.905580 16187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0108 21:31:14.909105 16187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0108 21:31:14.928149 16187 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0108 21:31:14.928173 16187 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
I0108 21:31:14.928243 16187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0108 21:31:14.951791 16187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0108 21:31:14.974475 16187 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0108 21:31:14.993254 16187 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0108 21:31:14.993285 16187 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0108 21:31:14.993350 16187 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0108 21:31:15.011474 16187 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0108 21:31:15.011519 16187 cache_images.go:92] LoadImages completed in 740.031328ms
W0108 21:31:15.011560 16187 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17830-12965/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
I0108 21:31:15.011642 16187 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0108 21:31:15.058740 16187 cni.go:84] Creating CNI manager for ""
I0108 21:31:15.058756 16187 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0108 21:31:15.058770 16187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0108 21:31:15.058784 16187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-136000 NodeName:ingress-addon-legacy-136000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0108 21:31:15.058920 16187 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-136000"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0108 21:31:15.058995 16187 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-136000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0108 21:31:15.059053 16187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0108 21:31:15.067589 16187 binaries.go:44] Found k8s binaries, skipping transfer
I0108 21:31:15.067646 16187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0108 21:31:15.075996 16187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0108 21:31:15.091342 16187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0108 21:31:15.106717 16187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0108 21:31:15.122110 16187 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0108 21:31:15.126234 16187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 21:31:15.136536 16187 certs.go:56] Setting up /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000 for IP: 192.168.49.2
I0108 21:31:15.136553 16187 certs.go:190] acquiring lock for shared ca certs: {Name:mk9c9fd1f75dd7a8bffbb58ea79df4e56e8e667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:31:15.136725 16187 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.key
I0108 21:31:15.136794 16187 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17830-12965/.minikube/proxy-client-ca.key
I0108 21:31:15.136837 16187 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/client.key
I0108 21:31:15.136850 16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/client.crt with IP's: []
I0108 21:31:15.302852 16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/client.crt ...
I0108 21:31:15.302867 16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/client.crt: {Name:mka2b35eec83fe03a16217383e95dc9760ecdf26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:31:15.303204 16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/client.key ...
I0108 21:31:15.303213 16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/client.key: {Name:mkb44f3d7c2e7c7c1af16783e0b9016295679140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:31:15.303437 16187 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.key.dd3b5fb2
I0108 21:31:15.303452 16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0108 21:31:15.372097 16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.crt.dd3b5fb2 ...
I0108 21:31:15.372105 16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.crt.dd3b5fb2: {Name:mkd952b71cc79599a8805150af5e44cd12ab3276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:31:15.372353 16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.key.dd3b5fb2 ...
I0108 21:31:15.372361 16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.key.dd3b5fb2: {Name:mkea2de8797b52668599505f2d0b1205e5c16220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:31:15.372559 16187 certs.go:337] copying /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.crt
I0108 21:31:15.372738 16187 certs.go:341] copying /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.key
I0108 21:31:15.372905 16187 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.key
I0108 21:31:15.372918 16187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.crt with IP's: []
I0108 21:31:15.476613 16187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.crt ...
I0108 21:31:15.476625 16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.crt: {Name:mk4105cd25b9f96494a13dfe9fd803a5bede0dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:31:15.476900 16187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.key ...
I0108 21:31:15.476912 16187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.key: {Name:mkef26f63a7957b532e3acc78459f3ff15dbd96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 21:31:15.477128 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0108 21:31:15.477155 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0108 21:31:15.477173 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0108 21:31:15.477193 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0108 21:31:15.477217 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0108 21:31:15.477234 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0108 21:31:15.477255 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0108 21:31:15.477271 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0108 21:31:15.477363 16187 certs.go:437] found cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/13408.pem (1338 bytes)
W0108 21:31:15.477409 16187 certs.go:433] ignoring /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/13408_empty.pem, impossibly tiny 0 bytes
I0108 21:31:15.477418 16187 certs.go:437] found cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca-key.pem (1675 bytes)
I0108 21:31:15.477452 16187 certs.go:437] found cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/ca.pem (1078 bytes)
I0108 21:31:15.477487 16187 certs.go:437] found cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/cert.pem (1123 bytes)
I0108 21:31:15.477517 16187 certs.go:437] found cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/Users/jenkins/minikube-integration/17830-12965/.minikube/certs/key.pem (1679 bytes)
I0108 21:31:15.477578 16187 certs.go:437] found cert: /Users/jenkins/minikube-integration/17830-12965/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17830-12965/.minikube/files/etc/ssl/certs/134082.pem (1708 bytes)
I0108 21:31:15.477612 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/13408.pem -> /usr/share/ca-certificates/13408.pem
I0108 21:31:15.477637 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/files/etc/ssl/certs/134082.pem -> /usr/share/ca-certificates/134082.pem
I0108 21:31:15.477657 16187 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0108 21:31:15.478103 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0108 21:31:15.498854 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0108 21:31:15.519332 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0108 21:31:15.539682 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/profiles/ingress-addon-legacy-136000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0108 21:31:15.560313 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0108 21:31:15.580741 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0108 21:31:15.600997 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0108 21:31:15.621342 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0108 21:31:15.641907 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/certs/13408.pem --> /usr/share/ca-certificates/13408.pem (1338 bytes)
I0108 21:31:15.662234 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/files/etc/ssl/certs/134082.pem --> /usr/share/ca-certificates/134082.pem (1708 bytes)
I0108 21:31:15.682578 16187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17830-12965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0108 21:31:15.703082 16187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0108 21:31:15.718417 16187 ssh_runner.go:195] Run: openssl version
I0108 21:31:15.723931 16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134082.pem && ln -fs /usr/share/ca-certificates/134082.pem /etc/ssl/certs/134082.pem"
I0108 21:31:15.733032 16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134082.pem
I0108 21:31:15.737132 16187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 9 05:25 /usr/share/ca-certificates/134082.pem
I0108 21:31:15.737176 16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134082.pem
I0108 21:31:15.743654 16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134082.pem /etc/ssl/certs/3ec20f2e.0"
I0108 21:31:15.752452 16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0108 21:31:15.761363 16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0108 21:31:15.765441 16187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 9 05:19 /usr/share/ca-certificates/minikubeCA.pem
I0108 21:31:15.765492 16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0108 21:31:15.771898 16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0108 21:31:15.780786 16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13408.pem && ln -fs /usr/share/ca-certificates/13408.pem /etc/ssl/certs/13408.pem"
I0108 21:31:15.789642 16187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13408.pem
I0108 21:31:15.793744 16187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 9 05:25 /usr/share/ca-certificates/13408.pem
I0108 21:31:15.793792 16187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13408.pem
I0108 21:31:15.800281 16187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13408.pem /etc/ssl/certs/51391683.0"
I0108 21:31:15.809109 16187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0108 21:31:15.812907 16187 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0108 21:31:15.812961 16187 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0108 21:31:15.813075 16187 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0108 21:31:15.831538 16187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0108 21:31:15.840114 16187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 21:31:15.848381 16187 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0108 21:31:15.848437 16187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:31:15.856575 16187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 21:31:15.856601 16187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 21:31:15.903209 16187 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0108 21:31:15.903257 16187 kubeadm.go:322] [preflight] Running pre-flight checks
I0108 21:31:16.136604 16187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0108 21:31:16.136694 16187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0108 21:31:16.136783 16187 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0108 21:31:16.306079 16187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0108 21:31:16.306987 16187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0108 21:31:16.307033 16187 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0108 21:31:16.375968 16187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0108 21:31:16.419372 16187 out.go:204] - Generating certificates and keys ...
I0108 21:31:16.419449 16187 kubeadm.go:322] [certs] Using existing ca certificate authority
I0108 21:31:16.419516 16187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0108 21:31:16.579151 16187 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0108 21:31:16.684368 16187 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0108 21:31:16.816316 16187 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0108 21:31:16.891485 16187 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0108 21:31:16.981655 16187 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0108 21:31:16.981816 16187 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-136000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0108 21:31:17.219122 16187 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0108 21:31:17.219247 16187 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-136000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0108 21:31:17.318974 16187 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0108 21:31:17.504469 16187 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0108 21:31:17.642796 16187 kubeadm.go:322] [certs] Generating "sa" key and public key
I0108 21:31:17.642850 16187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0108 21:31:17.805340 16187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0108 21:31:17.883951 16187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0108 21:31:18.128450 16187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0108 21:31:18.300458 16187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0108 21:31:18.300910 16187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0108 21:31:18.322413 16187 out.go:204] - Booting up control plane ...
I0108 21:31:18.322564 16187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0108 21:31:18.322687 16187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0108 21:31:18.322807 16187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0108 21:31:18.322951 16187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0108 21:31:18.323202 16187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0108 21:31:58.310291 16187 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0108 21:31:58.311092 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:31:58.311319 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:32:03.312870 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:32:03.313107 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:32:13.315239 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:32:13.315464 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:32:33.316465 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:32:33.316681 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:33:13.318649 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:33:13.318883 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:33:13.318913 16187 kubeadm.go:322]
I0108 21:33:13.318958 16187 kubeadm.go:322] Unfortunately, an error has occurred:
I0108 21:33:13.318996 16187 kubeadm.go:322] timed out waiting for the condition
I0108 21:33:13.319004 16187 kubeadm.go:322]
I0108 21:33:13.319045 16187 kubeadm.go:322] This error is likely caused by:
I0108 21:33:13.319079 16187 kubeadm.go:322] - The kubelet is not running
I0108 21:33:13.319205 16187 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0108 21:33:13.319227 16187 kubeadm.go:322]
I0108 21:33:13.319392 16187 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0108 21:33:13.319444 16187 kubeadm.go:322] - 'systemctl status kubelet'
I0108 21:33:13.319489 16187 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0108 21:33:13.319508 16187 kubeadm.go:322]
I0108 21:33:13.319634 16187 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0108 21:33:13.319732 16187 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0108 21:33:13.319741 16187 kubeadm.go:322]
I0108 21:33:13.319835 16187 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0108 21:33:13.319904 16187 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0108 21:33:13.319962 16187 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0108 21:33:13.319986 16187 kubeadm.go:322] - 'docker logs CONTAINERID'
I0108 21:33:13.319993 16187 kubeadm.go:322]
I0108 21:33:13.321328 16187 kubeadm.go:322] W0109 05:31:15.902425 1704 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0108 21:33:13.321474 16187 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0108 21:33:13.321546 16187 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0108 21:33:13.321653 16187 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0108 21:33:13.321740 16187 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 21:33:13.321852 16187 kubeadm.go:322] W0109 05:31:18.305370 1704 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 21:33:13.321953 16187 kubeadm.go:322] W0109 05:31:18.306105 1704 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 21:33:13.322030 16187 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0108 21:33:13.322093 16187 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0108 21:33:13.322181 16187 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-136000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-136000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 05:31:15.902425 1704 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 05:31:18.305370 1704 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 05:31:18.306105 1704 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-136000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-136000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 05:31:15.902425 1704 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 05:31:18.305370 1704 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 05:31:18.306105 1704 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
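kubeadm's troubleshooting advice above assumes a shell on the node; with minikube's docker driver the node is itself a container, so the same checks go through minikube ssh. A sketch using the profile name from this run, assuming the node container is still up and that curl is available in the node image:

    # kubeadm's suggested checks, run inside the minikube node for this profile.
    minikube ssh -p ingress-addon-legacy-136000 -- sudo systemctl status kubelet
    minikube ssh -p ingress-addon-legacy-136000 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # The health endpoint kubeadm was polling (connection refused throughout the log above):
    minikube ssh -p ingress-addon-legacy-136000 -- curl -sS http://localhost:10248/healthz
    # Any Kubernetes containers the runtime managed to start (none were found in this run):
    minikube ssh -p ingress-addon-legacy-136000 -- 'docker ps -a | grep kube | grep -v pause'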
I0108 21:33:13.322233 16187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0108 21:33:13.765361 16187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 21:33:13.775616 16187 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0108 21:33:13.775679 16187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 21:33:13.783999 16187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 21:33:13.784025 16187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 21:33:13.835359 16187 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0108 21:33:13.835414 16187 kubeadm.go:322] [preflight] Running pre-flight checks
I0108 21:33:14.102753 16187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0108 21:33:14.102833 16187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0108 21:33:14.102915 16187 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0108 21:33:14.274747 16187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0108 21:33:14.275265 16187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0108 21:33:14.275299 16187 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0108 21:33:14.354942 16187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0108 21:33:14.376187 16187 out.go:204] - Generating certificates and keys ...
I0108 21:33:14.376264 16187 kubeadm.go:322] [certs] Using existing ca certificate authority
I0108 21:33:14.376313 16187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0108 21:33:14.376365 16187 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0108 21:33:14.376422 16187 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0108 21:33:14.376480 16187 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0108 21:33:14.376534 16187 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0108 21:33:14.376610 16187 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0108 21:33:14.376660 16187 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0108 21:33:14.376727 16187 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0108 21:33:14.376796 16187 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0108 21:33:14.376824 16187 kubeadm.go:322] [certs] Using the existing "sa" key
I0108 21:33:14.376867 16187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0108 21:33:14.402454 16187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0108 21:33:14.525996 16187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0108 21:33:14.700539 16187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0108 21:33:14.841482 16187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0108 21:33:14.841868 16187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0108 21:33:14.863144 16187 out.go:204] - Booting up control plane ...
I0108 21:33:14.863235 16187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0108 21:33:14.863327 16187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0108 21:33:14.863411 16187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0108 21:33:14.863505 16187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0108 21:33:14.863677 16187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0108 21:33:54.851379 16187 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0108 21:33:54.853502 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:33:54.853975 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:33:59.856214 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:33:59.856418 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:34:09.857887 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:34:09.858097 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:34:29.859888 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:34:29.860130 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:35:09.862010 16187 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 21:35:09.862235 16187 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 21:35:09.862250 16187 kubeadm.go:322]
I0108 21:35:09.862307 16187 kubeadm.go:322] Unfortunately, an error has occurred:
I0108 21:35:09.862370 16187 kubeadm.go:322] timed out waiting for the condition
I0108 21:35:09.862387 16187 kubeadm.go:322]
I0108 21:35:09.862421 16187 kubeadm.go:322] This error is likely caused by:
I0108 21:35:09.862472 16187 kubeadm.go:322] - The kubelet is not running
I0108 21:35:09.862611 16187 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0108 21:35:09.862628 16187 kubeadm.go:322]
I0108 21:35:09.862779 16187 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0108 21:35:09.862824 16187 kubeadm.go:322] - 'systemctl status kubelet'
I0108 21:35:09.862856 16187 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0108 21:35:09.862865 16187 kubeadm.go:322]
I0108 21:35:09.862967 16187 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0108 21:35:09.863053 16187 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0108 21:35:09.863061 16187 kubeadm.go:322]
I0108 21:35:09.863176 16187 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0108 21:35:09.863236 16187 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0108 21:35:09.863317 16187 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0108 21:35:09.863356 16187 kubeadm.go:322] - 'docker logs CONTAINERID'
I0108 21:35:09.863365 16187 kubeadm.go:322]
I0108 21:35:09.865072 16187 kubeadm.go:322] W0109 05:33:13.834584 4726 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0108 21:35:09.865214 16187 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0108 21:35:09.865278 16187 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0108 21:35:09.865371 16187 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0108 21:35:09.865453 16187 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 21:35:09.865570 16187 kubeadm.go:322] W0109 05:33:14.846418 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 21:35:09.865721 16187 kubeadm.go:322] W0109 05:33:14.847336 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 21:35:09.865786 16187 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0108 21:35:09.865848 16187 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0108 21:35:09.865874 16187 kubeadm.go:406] StartCluster complete in 3m54.052199008s
I0108 21:35:09.865956 16187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0108 21:35:09.884383 16187 logs.go:284] 0 containers: []
W0108 21:35:09.884396 16187 logs.go:286] No container was found matching "kube-apiserver"
I0108 21:35:09.884462 16187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0108 21:35:09.902172 16187 logs.go:284] 0 containers: []
W0108 21:35:09.902185 16187 logs.go:286] No container was found matching "etcd"
I0108 21:35:09.902260 16187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0108 21:35:09.919356 16187 logs.go:284] 0 containers: []
W0108 21:35:09.919371 16187 logs.go:286] No container was found matching "coredns"
I0108 21:35:09.919445 16187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0108 21:35:09.936657 16187 logs.go:284] 0 containers: []
W0108 21:35:09.936671 16187 logs.go:286] No container was found matching "kube-scheduler"
I0108 21:35:09.936737 16187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0108 21:35:09.955307 16187 logs.go:284] 0 containers: []
W0108 21:35:09.955320 16187 logs.go:286] No container was found matching "kube-proxy"
I0108 21:35:09.955387 16187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0108 21:35:09.972959 16187 logs.go:284] 0 containers: []
W0108 21:35:09.972973 16187 logs.go:286] No container was found matching "kube-controller-manager"
I0108 21:35:09.973049 16187 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0108 21:35:09.991076 16187 logs.go:284] 0 containers: []
W0108 21:35:09.991090 16187 logs.go:286] No container was found matching "kindnet"
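The empty results above come from name-filtered docker queries; the k8s_ prefix is how dockershim names the containers it creates, so zero matches suggests the kubelet never created any control-plane containers at all. The same query can be reproduced by hand inside the node, for example:

    # Reproduce minikube's control-plane container check (expected to be empty here).
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
    # Or list every Kubernetes-managed container with its state:
    docker ps -a --filter name=k8s_ --format 'table {{.Names}}\t{{.Status}}'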
I0108 21:35:09.991097 16187 logs.go:123] Gathering logs for container status ...
I0108 21:35:09.991111 16187 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0108 21:35:10.041142 16187 logs.go:123] Gathering logs for kubelet ...
I0108 21:35:10.041158 16187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0108 21:35:10.076230 16187 logs.go:123] Gathering logs for dmesg ...
I0108 21:35:10.076266 16187 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0108 21:35:10.088486 16187 logs.go:123] Gathering logs for describe nodes ...
I0108 21:35:10.088499 16187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0108 21:35:10.140391 16187 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0108 21:35:10.140403 16187 logs.go:123] Gathering logs for Docker ...
I0108 21:35:10.140411 16187 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
W0108 21:35:10.155740 16187 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 05:33:13.834584 4726 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 05:33:14.846418 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 05:33:14.847336 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 21:35:10.155762 16187 out.go:239] *
*
W0108 21:35:10.155814 16187 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 05:33:13.834584 4726 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 05:33:14.846418 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 05:33:14.847336 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 05:33:13.834584 4726 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 05:33:14.846418 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 05:33:14.847336 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 21:35:10.155840 16187 out.go:239] *
*
W0108 21:35:10.156426 16187 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
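For the bug report the box above asks for, the log bundle can be captured against this specific profile; a sketch, reusing the --file flag minikube itself suggests (the filename is arbitrary):

    # Collect the full minikube log bundle for the failing profile and attach it to the issue.
    minikube logs --file=logs.txt -p ingress-addon-legacy-136000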
I0108 21:35:10.218902 16187 out.go:177]
W0108 21:35:10.260739 16187 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 05:33:13.834584 4726 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 05:33:14.846418 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 05:33:14.847336 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 05:33:13.834584 4726 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 05:33:14.846418 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 05:33:14.847336 4726 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 21:35:10.260790 16187 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0108 21:35:10.260807 16187 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
* Related issue: https://github.com/kubernetes/minikube/issues/4172
I0108 21:35:10.281929 16187 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-136000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (262.01s)
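The exit status 109 here accompanies the K8S_KUBELET_NOT_RUNNING reason above, and the log's own suggestion is the natural next experiment: delete the half-initialized profile and retry with the kubelet's cgroup driver pinned. A sketch of that retry, reusing the exact flag minikube printed (whether it resolves the mismatch against a cgroupfs Docker is not established by this log):

    # Retry the failed start with the cgroup-driver override minikube suggested.
    minikube delete -p ingress-addon-legacy-136000
    minikube start -p ingress-addon-legacy-136000 --kubernetes-version=v1.18.20 --memory=4096 \
      --driver=docker --extra-config=kubelet.cgroup-driver=systemd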