=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-085453 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E1107 08:55:46.946322 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:58:03.112402 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:58:30.839671 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/addons-084531/client.crt: no such file or directory
E1107 08:58:30.841778 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:30.848219 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:30.858936 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:30.881118 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:30.922066 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:31.002523 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:31.164723 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:31.485460 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:32.127532 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:33.408657 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:35.969038 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:41.091551 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
E1107 08:58:51.332018 3267 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/functional-085021/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-085453 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.413111419s)
-- stdout --
* [ingress-addon-legacy-085453] minikube v1.28.0 on Darwin 13.0
- MINIKUBE_LOCATION=15310
- KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-085453 in cluster ingress-addon-legacy-085453
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I1107 08:54:53.444564 5852 out.go:296] Setting OutFile to fd 1 ...
I1107 08:54:53.444729 5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 08:54:53.444735 5852 out.go:309] Setting ErrFile to fd 2...
I1107 08:54:53.444739 5852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 08:54:53.444846 5852 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15310-2115/.minikube/bin
I1107 08:54:53.445404 5852 out.go:303] Setting JSON to false
I1107 08:54:53.464920 5852 start.go:116] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1468,"bootTime":1667838625,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W1107 08:54:53.465008 5852 start.go:124] gopshost.Virtualization returned error: not implemented yet
I1107 08:54:53.486920 5852 out.go:177] * [ingress-addon-legacy-085453] minikube v1.28.0 on Darwin 13.0
I1107 08:54:53.530004 5852 notify.go:220] Checking for updates...
I1107 08:54:53.551867 5852 out.go:177] - MINIKUBE_LOCATION=15310
I1107 08:54:53.572849 5852 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15310-2115/kubeconfig
I1107 08:54:53.594622 5852 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I1107 08:54:53.616090 5852 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1107 08:54:53.637827 5852 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15310-2115/.minikube
I1107 08:54:53.660202 5852 driver.go:365] Setting default libvirt URI to qemu:///system
I1107 08:54:53.722250 5852 docker.go:137] docker version: linux-20.10.20
I1107 08:54:53.722406 5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 08:54:53.865180 5852 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-07 16:54:53.79305721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 08:54:53.908878 5852 out.go:177] * Using the docker driver based on user configuration
I1107 08:54:53.930985 5852 start.go:282] selected driver: docker
I1107 08:54:53.931019 5852 start.go:808] validating driver "docker" against <nil>
I1107 08:54:53.931042 5852 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1107 08:54:53.934905 5852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 08:54:54.075367 5852 info.go:266] docker info: {ID:QFCO:F62N:MHXK:7ALZ:TX2O:ANBM:VPCR:JFHZ:3RWK:DKG3:X765:OYAT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-07 16:54:53.984806793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 08:54:54.075488 5852 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1107 08:54:54.075626 5852 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1107 08:54:54.097581 5852 out.go:177] * Using Docker Desktop driver with root privileges
I1107 08:54:54.119112 5852 cni.go:95] Creating CNI manager for ""
I1107 08:54:54.119148 5852 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1107 08:54:54.119180 5852 start_flags.go:317] config:
{Name:ingress-addon-legacy-085453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-085453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 08:54:54.141341 5852 out.go:177] * Starting control plane node ingress-addon-legacy-085453 in cluster ingress-addon-legacy-085453
I1107 08:54:54.184106 5852 cache.go:120] Beginning downloading kic base image for docker with docker
I1107 08:54:54.206289 5852 out.go:177] * Pulling base image ...
I1107 08:54:54.248161 5852 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1107 08:54:54.248170 5852 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1107 08:54:54.301010 5852 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I1107 08:54:54.301039 5852 cache.go:57] Caching tarball of preloaded images
I1107 08:54:54.301235 5852 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1107 08:54:54.344305 5852 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I1107 08:54:54.366170 5852 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1107 08:54:54.368898 5852 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1107 08:54:54.368921 5852 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1107 08:54:54.442694 5852 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I1107 08:54:58.975377 5852 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1107 08:54:58.975540 5852 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1107 08:54:59.600943 5852 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I1107 08:54:59.601179 5852 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/config.json ...
I1107 08:54:59.601212 5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/config.json: {Name:mkd87cb689386a98c42ec5c9221126cd7a0cd281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 08:54:59.601483 5852 cache.go:208] Successfully downloaded all kic artifacts
I1107 08:54:59.601509 5852 start.go:364] acquiring machines lock for ingress-addon-legacy-085453: {Name:mk63bffbe8a3bd903498e250074e58ae13193d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1107 08:54:59.601601 5852 start.go:368] acquired machines lock for "ingress-addon-legacy-085453" in 84.374µs
I1107 08:54:59.601627 5852 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-085453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-085453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I1107 08:54:59.601675 5852 start.go:125] createHost starting for "" (driver="docker")
I1107 08:54:59.623206 5852 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1107 08:54:59.623614 5852 start.go:159] libmachine.API.Create for "ingress-addon-legacy-085453" (driver="docker")
I1107 08:54:59.623660 5852 client.go:168] LocalClient.Create starting
I1107 08:54:59.623857 5852 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem
I1107 08:54:59.623951 5852 main.go:134] libmachine: Decoding PEM data...
I1107 08:54:59.623985 5852 main.go:134] libmachine: Parsing certificate...
I1107 08:54:59.624084 5852 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem
I1107 08:54:59.624154 5852 main.go:134] libmachine: Decoding PEM data...
I1107 08:54:59.624171 5852 main.go:134] libmachine: Parsing certificate...
I1107 08:54:59.645755 5852 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-085453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1107 08:54:59.702275 5852 cli_runner.go:211] docker network inspect ingress-addon-legacy-085453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1107 08:54:59.702397 5852 network_create.go:272] running [docker network inspect ingress-addon-legacy-085453] to gather additional debugging logs...
I1107 08:54:59.702418 5852 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-085453
W1107 08:54:59.756870 5852 cli_runner.go:211] docker network inspect ingress-addon-legacy-085453 returned with exit code 1
I1107 08:54:59.756902 5852 network_create.go:275] error running [docker network inspect ingress-addon-legacy-085453]: docker network inspect ingress-addon-legacy-085453: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-085453
I1107 08:54:59.756924 5852 network_create.go:277] output of [docker network inspect ingress-addon-legacy-085453]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-085453
** /stderr **
I1107 08:54:59.757034 5852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1107 08:54:59.811673 5852 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000834218] misses:0}
I1107 08:54:59.811714 5852 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1107 08:54:59.811729 5852 network_create.go:115] attempt to create docker network ingress-addon-legacy-085453 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1107 08:54:59.811827 5852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-085453 ingress-addon-legacy-085453
I1107 08:54:59.980214 5852 network_create.go:99] docker network ingress-addon-legacy-085453 192.168.49.0/24 created
I1107 08:54:59.980253 5852 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-085453" container
I1107 08:54:59.980381 5852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1107 08:55:00.035644 5852 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-085453 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-085453 --label created_by.minikube.sigs.k8s.io=true
I1107 08:55:00.089720 5852 oci.go:103] Successfully created a docker volume ingress-addon-legacy-085453
I1107 08:55:00.089858 5852 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-085453-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-085453 --entrypoint /usr/bin/test -v ingress-addon-legacy-085453:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I1107 08:55:00.537187 5852 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-085453
I1107 08:55:00.537241 5852 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1107 08:55:00.537256 5852 kic.go:179] Starting extracting preloaded images to volume ...
I1107 08:55:00.537384 5852 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-085453:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
I1107 08:55:05.232150 5852 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15310-2115/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-085453:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.694615763s)
I1107 08:55:05.232180 5852 kic.go:188] duration metric: took 4.694849 seconds to extract preloaded images to volume
I1107 08:55:05.232315 5852 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1107 08:55:05.372048 5852 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-085453 --name ingress-addon-legacy-085453 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-085453 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-085453 --network ingress-addon-legacy-085453 --ip 192.168.49.2 --volume ingress-addon-legacy-085453:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I1107 08:55:05.723106 5852 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Running}}
I1107 08:55:05.784816 5852 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Status}}
I1107 08:55:05.847501 5852 cli_runner.go:164] Run: docker exec ingress-addon-legacy-085453 stat /var/lib/dpkg/alternatives/iptables
I1107 08:55:05.964212 5852 oci.go:144] the created container "ingress-addon-legacy-085453" has a running status.
I1107 08:55:05.964241 5852 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa...
I1107 08:55:06.251083 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1107 08:55:06.251163 5852 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1107 08:55:06.348722 5852 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Status}}
I1107 08:55:06.404493 5852 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1107 08:55:06.404509 5852 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-085453 chown docker:docker /home/docker/.ssh/authorized_keys]
I1107 08:55:06.503249 5852 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-085453 --format={{.State.Status}}
I1107 08:55:06.559643 5852 machine.go:88] provisioning docker machine ...
I1107 08:55:06.559684 5852 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-085453"
I1107 08:55:06.559792 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:06.617137 5852 main.go:134] libmachine: Using SSH client type: native
I1107 08:55:06.617338 5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50511 <nil> <nil>}
I1107 08:55:06.617356 5852 main.go:134] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-085453 && echo "ingress-addon-legacy-085453" | sudo tee /etc/hostname
I1107 08:55:06.739845 5852 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-085453
I1107 08:55:06.739968 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:06.796544 5852 main.go:134] libmachine: Using SSH client type: native
I1107 08:55:06.796702 5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50511 <nil> <nil>}
I1107 08:55:06.796725 5852 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-085453' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-085453/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-085453' | sudo tee -a /etc/hosts;
fi
fi
I1107 08:55:06.913010 5852 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1107 08:55:06.913031 5852 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15310-2115/.minikube CaCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15310-2115/.minikube}
I1107 08:55:06.913051 5852 ubuntu.go:177] setting up certificates
I1107 08:55:06.913060 5852 provision.go:83] configureAuth start
I1107 08:55:06.913161 5852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-085453
I1107 08:55:06.969038 5852 provision.go:138] copyHostCerts
I1107 08:55:06.969092 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
I1107 08:55:06.969162 5852 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem, removing ...
I1107 08:55:06.969170 5852 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem
I1107 08:55:06.969278 5852 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/key.pem (1679 bytes)
I1107 08:55:06.969449 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
I1107 08:55:06.969490 5852 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem, removing ...
I1107 08:55:06.969494 5852 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem
I1107 08:55:06.969560 5852 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.pem (1082 bytes)
I1107 08:55:06.969690 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
I1107 08:55:06.969724 5852 exec_runner.go:144] found /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem, removing ...
I1107 08:55:06.969729 5852 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem
I1107 08:55:06.969807 5852 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15310-2115/.minikube/cert.pem (1123 bytes)
I1107 08:55:06.969952 5852 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-085453 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-085453]
I1107 08:55:07.039589 5852 provision.go:172] copyRemoteCerts
I1107 08:55:07.039652 5852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1107 08:55:07.039713 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:07.097248 5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
I1107 08:55:07.182985 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1107 08:55:07.183072 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1107 08:55:07.199588 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem -> /etc/docker/server.pem
I1107 08:55:07.199672 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I1107 08:55:07.216324 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1107 08:55:07.216408 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1107 08:55:07.232805 5852 provision.go:86] duration metric: configureAuth took 319.72806ms
I1107 08:55:07.232820 5852 ubuntu.go:193] setting minikube options for container-runtime
I1107 08:55:07.232978 5852 config.go:180] Loaded profile config "ingress-addon-legacy-085453": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I1107 08:55:07.233090 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:07.289244 5852 main.go:134] libmachine: Using SSH client type: native
I1107 08:55:07.289403 5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50511 <nil> <nil>}
I1107 08:55:07.289418 5852 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1107 08:55:07.404958 5852 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I1107 08:55:07.404975 5852 ubuntu.go:71] root file system type: overlay
I1107 08:55:07.405161 5852 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1107 08:55:07.405267 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:07.463497 5852 main.go:134] libmachine: Using SSH client type: native
I1107 08:55:07.463660 5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50511 <nil> <nil>}
I1107 08:55:07.463709 5852 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1107 08:55:07.594714 5852 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1107 08:55:07.594834 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:07.651340 5852 main.go:134] libmachine: Using SSH client type: native
I1107 08:55:07.651497 5852 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6c00] 0x13e9d80 <nil> [] 0s} 127.0.0.1 50511 <nil> <nil>}
I1107 08:55:07.651512 5852 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1107 08:55:08.229223 5852 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-11-07 16:55:07.602246042 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I1107 08:55:08.229247 5852 machine.go:91] provisioned docker machine in 1.669558122s
I1107 08:55:08.229254 5852 client.go:171] LocalClient.Create took 8.605450332s
I1107 08:55:08.229275 5852 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-085453" took 8.605525265s
I1107 08:55:08.229286 5852 start.go:300] post-start starting for "ingress-addon-legacy-085453" (driver="docker")
I1107 08:55:08.229290 5852 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1107 08:55:08.229365 5852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1107 08:55:08.229439 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:08.286959 5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
I1107 08:55:08.375817 5852 ssh_runner.go:195] Run: cat /etc/os-release
I1107 08:55:08.379457 5852 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1107 08:55:08.379475 5852 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1107 08:55:08.379485 5852 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1107 08:55:08.379490 5852 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1107 08:55:08.379511 5852 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/addons for local assets ...
I1107 08:55:08.379613 5852 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15310-2115/.minikube/files for local assets ...
I1107 08:55:08.379795 5852 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> 32672.pem in /etc/ssl/certs
I1107 08:55:08.379802 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /etc/ssl/certs/32672.pem
I1107 08:55:08.380013 5852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1107 08:55:08.387209 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /etc/ssl/certs/32672.pem (1708 bytes)
I1107 08:55:08.404198 5852 start.go:303] post-start completed in 174.896092ms
I1107 08:55:08.404761 5852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-085453
I1107 08:55:08.478220 5852 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/config.json ...
I1107 08:55:08.478685 5852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1107 08:55:08.478764 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:08.535353 5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
I1107 08:55:08.617924 5852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1107 08:55:08.622420 5852 start.go:128] duration metric: createHost completed in 9.020590193s
I1107 08:55:08.622439 5852 start.go:83] releasing machines lock for "ingress-addon-legacy-085453", held for 9.020686019s
I1107 08:55:08.622543 5852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-085453
I1107 08:55:08.678128 5852 ssh_runner.go:195] Run: systemctl --version
I1107 08:55:08.678130 5852 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I1107 08:55:08.678210 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:08.678221 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:08.739156 5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
I1107 08:55:08.739776 5852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50511 SSHKeyPath:/Users/jenkins/minikube-integration/15310-2115/.minikube/machines/ingress-addon-legacy-085453/id_rsa Username:docker}
I1107 08:55:09.070766 5852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1107 08:55:09.080798 5852 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1107 08:55:09.080881 5852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1107 08:55:09.089857 5852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1107 08:55:09.102237 5852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1107 08:55:09.170415 5852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1107 08:55:09.233769 5852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 08:55:09.298241 5852 ssh_runner.go:195] Run: sudo systemctl restart docker
I1107 08:55:09.503525 5852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1107 08:55:09.533469 5852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1107 08:55:09.603684 5852 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
I1107 08:55:09.603912 5852 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-085453 dig +short host.docker.internal
I1107 08:55:09.722681 5852 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I1107 08:55:09.722795 5852 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I1107 08:55:09.727054 5852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1107 08:55:09.737128 5852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-085453
I1107 08:55:09.793842 5852 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1107 08:55:09.793932 5852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1107 08:55:09.817255 5852 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I1107 08:55:09.817274 5852 docker.go:543] Images already preloaded, skipping extraction
I1107 08:55:09.817364 5852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1107 08:55:09.839726 5852 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I1107 08:55:09.839750 5852 cache_images.go:84] Images are preloaded, skipping loading
I1107 08:55:09.839864 5852 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1107 08:55:09.906384 5852 cni.go:95] Creating CNI manager for ""
I1107 08:55:09.906398 5852 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1107 08:55:09.906414 5852 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1107 08:55:09.906434 5852 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-085453 NodeName:ingress-addon-legacy-085453 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1107 08:55:09.906563 5852 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-085453"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1107 08:55:09.906650 5852 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-085453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-085453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1107 08:55:09.906732 5852 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I1107 08:55:09.914375 5852 binaries.go:44] Found k8s binaries, skipping transfer
I1107 08:55:09.914436 5852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1107 08:55:09.921428 5852 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I1107 08:55:09.933812 5852 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I1107 08:55:09.946127 5852 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
I1107 08:55:09.958577 5852 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1107 08:55:09.962176 5852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1107 08:55:09.971547 5852 certs.go:54] Setting up /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453 for IP: 192.168.49.2
I1107 08:55:09.971684 5852 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key
I1107 08:55:09.971759 5852 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key
I1107 08:55:09.971816 5852 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.key
I1107 08:55:09.971836 5852 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.crt with IP's: []
I1107 08:55:10.380145 5852 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.crt ...
I1107 08:55:10.380162 5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.crt: {Name:mk942764529c7e206d68dbdd491c39f2f3870744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 08:55:10.380468 5852 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.key ...
I1107 08:55:10.380477 5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/client.key: {Name:mkd0ada144a28bdd30dbfe767b0b675765b4b996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 08:55:10.380715 5852 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key.dd3b5fb2
I1107 08:55:10.380735 5852 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1107 08:55:10.468199 5852 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt.dd3b5fb2 ...
I1107 08:55:10.468207 5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt.dd3b5fb2: {Name:mk56c82efb76b2092397c0435a922be028cad462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 08:55:10.468504 5852 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key.dd3b5fb2 ...
I1107 08:55:10.468512 5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key.dd3b5fb2: {Name:mk4985429ae2d8822661c88f53b10c5e2aaa43a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 08:55:10.468736 5852 certs.go:320] copying /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt
I1107 08:55:10.468891 5852 certs.go:324] copying /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key
I1107 08:55:10.469052 5852 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key
I1107 08:55:10.469069 5852 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt with IP's: []
I1107 08:55:10.557531 5852 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt ...
I1107 08:55:10.557542 5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt: {Name:mk4f5c9510e2e83eb58e6ef1560e201884dd0ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 08:55:10.557805 5852 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key ...
I1107 08:55:10.557818 5852 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key: {Name:mk38f57a18f2d01999f61a0945dd3f6ad55b5f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 08:55:10.558022 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1107 08:55:10.558064 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1107 08:55:10.558090 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1107 08:55:10.558114 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1107 08:55:10.558138 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1107 08:55:10.558162 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1107 08:55:10.558184 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1107 08:55:10.558205 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1107 08:55:10.558299 5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem (1338 bytes)
W1107 08:55:10.558350 5852 certs.go:384] ignoring /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267_empty.pem, impossibly tiny 0 bytes
I1107 08:55:10.558365 5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca-key.pem (1679 bytes)
I1107 08:55:10.558405 5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/ca.pem (1082 bytes)
I1107 08:55:10.558438 5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/cert.pem (1123 bytes)
I1107 08:55:10.558479 5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/certs/key.pem (1679 bytes)
I1107 08:55:10.558556 5852 certs.go:388] found cert: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem (1708 bytes)
I1107 08:55:10.558617 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem -> /usr/share/ca-certificates/3267.pem
I1107 08:55:10.558642 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem -> /usr/share/ca-certificates/32672.pem
I1107 08:55:10.558664 5852 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1107 08:55:10.559128 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1107 08:55:10.576623 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1107 08:55:10.592951 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1107 08:55:10.609703 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/profiles/ingress-addon-legacy-085453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1107 08:55:10.626752 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1107 08:55:10.643004 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1107 08:55:10.659869 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1107 08:55:10.676464 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1107 08:55:10.693044 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/certs/3267.pem --> /usr/share/ca-certificates/3267.pem (1338 bytes)
I1107 08:55:10.709783 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/files/etc/ssl/certs/32672.pem --> /usr/share/ca-certificates/32672.pem (1708 bytes)
I1107 08:55:10.726516 5852 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15310-2115/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1107 08:55:10.743384 5852 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1107 08:55:10.755827 5852 ssh_runner.go:195] Run: openssl version
I1107 08:55:10.761243 5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3267.pem && ln -fs /usr/share/ca-certificates/3267.pem /etc/ssl/certs/3267.pem"
I1107 08:55:10.768705 5852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3267.pem
I1107 08:55:10.772466 5852 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 7 16:50 /usr/share/ca-certificates/3267.pem
I1107 08:55:10.772519 5852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3267.pem
I1107 08:55:10.777794 5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3267.pem /etc/ssl/certs/51391683.0"
I1107 08:55:10.785524 5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/32672.pem && ln -fs /usr/share/ca-certificates/32672.pem /etc/ssl/certs/32672.pem"
I1107 08:55:10.793113 5852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/32672.pem
I1107 08:55:10.796923 5852 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 7 16:50 /usr/share/ca-certificates/32672.pem
I1107 08:55:10.796981 5852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/32672.pem
I1107 08:55:10.802084 5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/32672.pem /etc/ssl/certs/3ec20f2e.0"
I1107 08:55:10.809693 5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1107 08:55:10.817353 5852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1107 08:55:10.821029 5852 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 7 16:46 /usr/share/ca-certificates/minikubeCA.pem
I1107 08:55:10.821087 5852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1107 08:55:10.826285 5852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1107 08:55:10.834031 5852 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-085453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-085453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 08:55:10.834172 5852 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1107 08:55:10.856023 5852 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1107 08:55:10.865314 5852 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1107 08:55:10.872667 5852 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1107 08:55:10.872727 5852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 08:55:10.880081 5852 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1107 08:55:10.880105 5852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1107 08:55:10.924891 5852 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I1107 08:55:10.925247 5852 kubeadm.go:317] [preflight] Running pre-flight checks
I1107 08:55:11.209867 5852 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1107 08:55:11.209960 5852 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1107 08:55:11.210041 5852 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1107 08:55:11.427340 5852 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1107 08:55:11.427846 5852 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1107 08:55:11.427877 5852 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1107 08:55:11.499836 5852 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1107 08:55:11.522711 5852 out.go:204] - Generating certificates and keys ...
I1107 08:55:11.522794 5852 kubeadm.go:317] [certs] Using existing ca certificate authority
I1107 08:55:11.522855 5852 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1107 08:55:11.733243 5852 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I1107 08:55:11.901032 5852 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I1107 08:55:12.081129 5852 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I1107 08:55:12.151548 5852 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I1107 08:55:12.267363 5852 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I1107 08:55:12.267505 5852 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1107 08:55:12.550254 5852 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I1107 08:55:12.550409 5852 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1107 08:55:12.708530 5852 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I1107 08:55:13.183196 5852 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I1107 08:55:13.256719 5852 kubeadm.go:317] [certs] Generating "sa" key and public key
I1107 08:55:13.256823 5852 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1107 08:55:13.394360 5852 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1107 08:55:13.599991 5852 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1107 08:55:13.693029 5852 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1107 08:55:13.779645 5852 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1107 08:55:13.780395 5852 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1107 08:55:13.823934 5852 out.go:204] - Booting up control plane ...
I1107 08:55:13.824122 5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1107 08:55:13.824288 5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1107 08:55:13.824417 5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1107 08:55:13.824554 5852 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1107 08:55:13.824836 5852 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1107 08:55:53.760814 5852 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I1107 08:55:53.761335 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:55:53.761472 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:55:58.759856 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:55:58.760079 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:56:08.754431 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:56:08.754636 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:56:28.747116 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:56:28.747424 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:57:08.747066 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:57:08.747605 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:57:08.747620 5852 kubeadm.go:317]
I1107 08:57:08.747798 5852 kubeadm.go:317] Unfortunately, an error has occurred:
I1107 08:57:08.747949 5852 kubeadm.go:317] timed out waiting for the condition
I1107 08:57:08.747966 5852 kubeadm.go:317]
I1107 08:57:08.748022 5852 kubeadm.go:317] This error is likely caused by:
I1107 08:57:08.748100 5852 kubeadm.go:317] - The kubelet is not running
I1107 08:57:08.748295 5852 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1107 08:57:08.748311 5852 kubeadm.go:317]
I1107 08:57:08.748439 5852 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1107 08:57:08.748475 5852 kubeadm.go:317] - 'systemctl status kubelet'
I1107 08:57:08.748513 5852 kubeadm.go:317] - 'journalctl -xeu kubelet'
I1107 08:57:08.748519 5852 kubeadm.go:317]
I1107 08:57:08.748637 5852 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1107 08:57:08.748746 5852 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1107 08:57:08.748754 5852 kubeadm.go:317]
I1107 08:57:08.748828 5852 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I1107 08:57:08.748891 5852 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I1107 08:57:08.748996 5852 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I1107 08:57:08.749024 5852 kubeadm.go:317] - 'docker logs CONTAINERID'
I1107 08:57:08.749031 5852 kubeadm.go:317]
I1107 08:57:08.752746 5852 kubeadm.go:317] W1107 16:55:10.936244 958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1107 08:57:08.752824 5852 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I1107 08:57:08.752943 5852 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
I1107 08:57:08.753027 5852 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1107 08:57:08.753121 5852 kubeadm.go:317] W1107 16:55:13.798255 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1107 08:57:08.753216 5852 kubeadm.go:317] W1107 16:55:13.799084 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1107 08:57:08.753280 5852 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1107 08:57:08.753339 5852 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W1107 08:57:08.753533 5852 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1107 16:55:10.936244 958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1107 16:55:13.798255 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1107 16:55:13.799084 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-085453 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1107 16:55:10.936244 958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1107 16:55:13.798255 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1107 16:55:13.799084 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
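Note: the kubeadm output above ends with generic troubleshooting advice. A minimal sketch of those same checks, run from the host against the kicbase container for this profile (the container name ingress-addon-legacy-085453 is taken from this log; these exact invocations are illustrative and were not part of this run):
  # inspect kubelet state inside the node container (name taken from this log)
  docker exec -t ingress-addon-legacy-085453 systemctl status kubelet --no-pager
  docker exec -t ingress-addon-legacy-085453 journalctl -xeu kubelet --no-pager | tail -n 100
  # list kube containers via the inner Docker daemon, then dump logs for a failing one
  docker exec -t ingress-addon-legacy-085453 /bin/bash -c "docker ps -a | grep kube | grep -v pause"
  docker exec -t ingress-addon-legacy-085453 docker logs CONTAINERID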
I1107 08:57:08.753565 5852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I1107 08:57:09.165819 5852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 08:57:09.175219 5852 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1107 08:57:09.175285 5852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 08:57:09.182909 5852 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1107 08:57:09.182942 5852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1107 08:57:09.230127 5852 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I1107 08:57:09.230176 5852 kubeadm.go:317] [preflight] Running pre-flight checks
I1107 08:57:09.515797 5852 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1107 08:57:09.515886 5852 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1107 08:57:09.515984 5852 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1107 08:57:09.728894 5852 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1107 08:57:09.729719 5852 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1107 08:57:09.729785 5852 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1107 08:57:09.802358 5852 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1107 08:57:09.823997 5852 out.go:204] - Generating certificates and keys ...
I1107 08:57:09.824069 5852 kubeadm.go:317] [certs] Using existing ca certificate authority
I1107 08:57:09.824129 5852 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1107 08:57:09.824213 5852 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1107 08:57:09.824282 5852 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
I1107 08:57:09.824363 5852 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
I1107 08:57:09.824426 5852 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
I1107 08:57:09.824497 5852 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
I1107 08:57:09.824541 5852 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
I1107 08:57:09.824622 5852 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1107 08:57:09.824692 5852 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1107 08:57:09.824725 5852 kubeadm.go:317] [certs] Using the existing "sa" key
I1107 08:57:09.824764 5852 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1107 08:57:10.045059 5852 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1107 08:57:10.113668 5852 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1107 08:57:10.230092 5852 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1107 08:57:10.313926 5852 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1107 08:57:10.314534 5852 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1107 08:57:10.336219 5852 out.go:204] - Booting up control plane ...
I1107 08:57:10.336402 5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1107 08:57:10.336522 5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1107 08:57:10.336646 5852 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1107 08:57:10.336764 5852 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1107 08:57:10.336995 5852 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1107 08:57:50.314839 5852 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I1107 08:57:50.315488 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:57:50.315642 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:57:55.313117 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:57:55.313282 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:58:05.308726 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:58:05.308963 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:58:25.295449 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:58:25.295609 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:59:05.270610 5852 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1107 08:59:05.270906 5852 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1107 08:59:05.270924 5852 kubeadm.go:317]
I1107 08:59:05.270975 5852 kubeadm.go:317] Unfortunately, an error has occurred:
I1107 08:59:05.271053 5852 kubeadm.go:317] timed out waiting for the condition
I1107 08:59:05.271070 5852 kubeadm.go:317]
I1107 08:59:05.271106 5852 kubeadm.go:317] This error is likely caused by:
I1107 08:59:05.271147 5852 kubeadm.go:317] - The kubelet is not running
I1107 08:59:05.271261 5852 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1107 08:59:05.271278 5852 kubeadm.go:317]
I1107 08:59:05.271383 5852 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1107 08:59:05.271424 5852 kubeadm.go:317] - 'systemctl status kubelet'
I1107 08:59:05.271459 5852 kubeadm.go:317] - 'journalctl -xeu kubelet'
I1107 08:59:05.271464 5852 kubeadm.go:317]
I1107 08:59:05.271566 5852 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1107 08:59:05.271666 5852 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1107 08:59:05.271678 5852 kubeadm.go:317]
I1107 08:59:05.271803 5852 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I1107 08:59:05.271856 5852 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I1107 08:59:05.271941 5852 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I1107 08:59:05.271975 5852 kubeadm.go:317] - 'docker logs CONTAINERID'
I1107 08:59:05.271983 5852 kubeadm.go:317]
I1107 08:59:05.274658 5852 kubeadm.go:317] W1107 16:57:09.212600 3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1107 08:59:05.274725 5852 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I1107 08:59:05.274860 5852 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
I1107 08:59:05.274960 5852 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1107 08:59:05.275057 5852 kubeadm.go:317] W1107 16:57:10.300670 3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1107 08:59:05.275156 5852 kubeadm.go:317] W1107 16:57:10.301471 3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1107 08:59:05.275234 5852 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1107 08:59:05.275289 5852 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I1107 08:59:05.275309 5852 kubeadm.go:398] StartCluster complete in 3m54.386737293s
I1107 08:59:05.275404 5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 08:59:05.297063 5852 logs.go:274] 0 containers: []
W1107 08:59:05.297076 5852 logs.go:276] No container was found matching "kube-apiserver"
I1107 08:59:05.297165 5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 08:59:05.318062 5852 logs.go:274] 0 containers: []
W1107 08:59:05.318074 5852 logs.go:276] No container was found matching "etcd"
I1107 08:59:05.318158 5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 08:59:05.339967 5852 logs.go:274] 0 containers: []
W1107 08:59:05.339979 5852 logs.go:276] No container was found matching "coredns"
I1107 08:59:05.340061 5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 08:59:05.361060 5852 logs.go:274] 0 containers: []
W1107 08:59:05.361072 5852 logs.go:276] No container was found matching "kube-scheduler"
I1107 08:59:05.361159 5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 08:59:05.383943 5852 logs.go:274] 0 containers: []
W1107 08:59:05.383954 5852 logs.go:276] No container was found matching "kube-proxy"
I1107 08:59:05.384039 5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 08:59:05.405732 5852 logs.go:274] 0 containers: []
W1107 08:59:05.405745 5852 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 08:59:05.405825 5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 08:59:05.426879 5852 logs.go:274] 0 containers: []
W1107 08:59:05.426890 5852 logs.go:276] No container was found matching "storage-provisioner"
I1107 08:59:05.426983 5852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 08:59:05.448113 5852 logs.go:274] 0 containers: []
W1107 08:59:05.448126 5852 logs.go:276] No container was found matching "kube-controller-manager"
I1107 08:59:05.448133 5852 logs.go:123] Gathering logs for dmesg ...
I1107 08:59:05.448140 5852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 08:59:05.459904 5852 logs.go:123] Gathering logs for describe nodes ...
I1107 08:59:05.459916 5852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 08:59:05.511297 5852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 08:59:05.511308 5852 logs.go:123] Gathering logs for Docker ...
I1107 08:59:05.511314 5852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 08:59:05.526573 5852 logs.go:123] Gathering logs for container status ...
I1107 08:59:05.526585 5852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 08:59:07.578517 5852 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051866863s)
I1107 08:59:07.578628 5852 logs.go:123] Gathering logs for kubelet ...
I1107 08:59:07.578634 5852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1107 08:59:07.616682 5852 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1107 16:57:09.212600 3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1107 16:57:10.300670 3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1107 16:57:10.301471 3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1107 08:59:07.616706 5852 out.go:239] *
W1107 08:59:07.616826 5852 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1107 16:57:09.212600 3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1107 16:57:10.300670 3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1107 16:57:10.301471 3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1107 08:59:07.616850 5852 out.go:239] *
W1107 08:59:07.617502 5852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1107 08:59:07.682267 5852 out.go:177]
W1107 08:59:07.726668 5852 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1107 16:57:09.212600 3465 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1107 16:57:10.300670 3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1107 16:57:10.301471 3465 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1107 08:59:07.726730 5852 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1107 08:59:07.726774 5852 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1107 08:59:07.775798 5852 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-085453 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.44s)
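A possible manual follow-up, based only on the suggestion printed above (a sketch; whether the kubelet cgroup-driver override actually resolves the failure in this environment is unverified):

  # Inspect the kubelet unit inside the failed node first, as the log suggests
  out/minikube-darwin-amd64 -p ingress-addon-legacy-085453 ssh "sudo journalctl -xeu kubelet | tail -n 100"

  # Retry the same start invocation with the suggested extra kubelet config
  out/minikube-darwin-amd64 start -p ingress-addon-legacy-085453 \
    --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
    --alsologtostderr -v=5 --driver=docker \
    --extra-config=kubelet.cgroup-driver=systemd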