=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-123658 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0108 12:37:43.401167 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:39:59.548434 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:40:16.833659 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.839114 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.849503 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.871656 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.912564 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:16.993007 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:17.155071 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:17.475476 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:18.117288 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:19.397763 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:21.958226 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:27.079027 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:27.239677 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/addons-122726/client.crt: no such file or directory
E0108 12:40:37.319210 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
E0108 12:40:57.799599 4083 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/functional-123245/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-123658 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m18.412502499s)
-- stdout --
* [ingress-addon-legacy-123658] minikube v1.28.0 on Darwin 13.0.1
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-123658 in cluster ingress-addon-legacy-123658
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 20.10.21 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0108 12:36:58.222386 6872 out.go:296] Setting OutFile to fd 1 ...
I0108 12:36:58.222560 6872 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 12:36:58.222566 6872 out.go:309] Setting ErrFile to fd 2...
I0108 12:36:58.222570 6872 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 12:36:58.222692 6872 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-2761/.minikube/bin
I0108 12:36:58.223233 6872 out.go:303] Setting JSON to false
I0108 12:36:58.241780 6872 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2191,"bootTime":1673208027,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0108 12:36:58.241879 6872 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0108 12:36:58.263595 6872 out.go:177] * [ingress-addon-legacy-123658] minikube v1.28.0 on Darwin 13.0.1
I0108 12:36:58.305536 6872 notify.go:220] Checking for updates...
I0108 12:36:58.327527 6872 out.go:177] - MINIKUBE_LOCATION=15565
I0108 12:36:58.348544 6872 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15565-2761/kubeconfig
I0108 12:36:58.370489 6872 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0108 12:36:58.391731 6872 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 12:36:58.450400 6872 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-2761/.minikube
I0108 12:36:58.474012 6872 driver.go:365] Setting default libvirt URI to qemu:///system
I0108 12:36:58.535470 6872 docker.go:137] docker version: linux-20.10.21
I0108 12:36:58.535623 6872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 12:36:58.674862 6872 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-08 20:36:58.584048081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I0108 12:36:58.718631 6872 out.go:177] * Using the docker driver based on user configuration
I0108 12:36:58.740499 6872 start.go:294] selected driver: docker
I0108 12:36:58.740529 6872 start.go:838] validating driver "docker" against <nil>
I0108 12:36:58.740554 6872 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 12:36:58.744435 6872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 12:36:58.885073 6872 info.go:266] docker info: {ID:5ZGC:6Z7C:CMVQ:QZDS:ZWTS:M4CQ:373D:RAJX:R4QB:QI3D:27UQ:FR5B Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-08 20:36:58.794100374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I0108 12:36:58.885190 6872 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I0108 12:36:58.885348 6872 start_flags.go:910] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0108 12:36:58.907224 6872 out.go:177] * Using Docker Desktop driver with root privileges
I0108 12:36:58.928947 6872 cni.go:95] Creating CNI manager for ""
I0108 12:36:58.929010 6872 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0108 12:36:58.929025 6872 start_flags.go:317] config:
{Name:ingress-addon-legacy-123658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 12:36:58.950874 6872 out.go:177] * Starting control plane node ingress-addon-legacy-123658 in cluster ingress-addon-legacy-123658
I0108 12:36:58.972042 6872 cache.go:120] Beginning downloading kic base image for docker with docker
I0108 12:36:58.994019 6872 out.go:177] * Pulling base image ...
I0108 12:36:59.037011 6872 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 12:36:59.037056 6872 image.go:76] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
I0108 12:36:59.094014 6872 image.go:80] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
I0108 12:36:59.094040 6872 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
I0108 12:36:59.144172 6872 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0108 12:36:59.144211 6872 cache.go:57] Caching tarball of preloaded images
I0108 12:36:59.144649 6872 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 12:36:59.188108 6872 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0108 12:36:59.209518 6872 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 12:36:59.438706 6872 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0108 12:37:07.137184 6872 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 12:37:07.137372 6872 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 12:37:07.751960 6872 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0108 12:37:07.752218 6872 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/config.json ...
I0108 12:37:07.752250 6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/config.json: {Name:mk13145cfd20d96138dbac72623c70117000dca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 12:37:07.752634 6872 cache.go:193] Successfully downloaded all kic artifacts
I0108 12:37:07.752662 6872 start.go:364] acquiring machines lock for ingress-addon-legacy-123658: {Name:mka9a351a5744740a5234f841f3cecbaf2564f33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 12:37:07.752838 6872 start.go:368] acquired machines lock for "ingress-addon-legacy-123658" in 169.088µs
I0108 12:37:07.752864 6872 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-123658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 12:37:07.753007 6872 start.go:125] createHost starting for "" (driver="docker")
I0108 12:37:07.805795 6872 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0108 12:37:07.806122 6872 start.go:159] libmachine.API.Create for "ingress-addon-legacy-123658" (driver="docker")
I0108 12:37:07.806168 6872 client.go:168] LocalClient.Create starting
I0108 12:37:07.806391 6872 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem
I0108 12:37:07.806489 6872 main.go:134] libmachine: Decoding PEM data...
I0108 12:37:07.806527 6872 main.go:134] libmachine: Parsing certificate...
I0108 12:37:07.806615 6872 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem
I0108 12:37:07.806681 6872 main.go:134] libmachine: Decoding PEM data...
I0108 12:37:07.806698 6872 main.go:134] libmachine: Parsing certificate...
I0108 12:37:07.807575 6872 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-123658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0108 12:37:07.865230 6872 cli_runner.go:211] docker network inspect ingress-addon-legacy-123658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0108 12:37:07.865344 6872 network_create.go:272] running [docker network inspect ingress-addon-legacy-123658] to gather additional debugging logs...
I0108 12:37:07.865366 6872 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-123658
W0108 12:37:07.919462 6872 cli_runner.go:211] docker network inspect ingress-addon-legacy-123658 returned with exit code 1
I0108 12:37:07.919493 6872 network_create.go:275] error running [docker network inspect ingress-addon-legacy-123658]: docker network inspect ingress-addon-legacy-123658: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-123658
I0108 12:37:07.919522 6872 network_create.go:277] output of [docker network inspect ingress-addon-legacy-123658]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-123658
** /stderr **
I0108 12:37:07.919635 6872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0108 12:37:07.974969 6872 network.go:306] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00011d8c8] misses:0}
I0108 12:37:07.975007 6872 network.go:239] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0108 12:37:07.975023 6872 network_create.go:115] attempt to create docker network ingress-addon-legacy-123658 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0108 12:37:07.975123 6872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-123658 ingress-addon-legacy-123658
I0108 12:37:08.068024 6872 network_create.go:99] docker network ingress-addon-legacy-123658 192.168.49.0/24 created
I0108 12:37:08.068067 6872 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-123658" container
I0108 12:37:08.068210 6872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0108 12:37:08.122354 6872 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-123658 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123658 --label created_by.minikube.sigs.k8s.io=true
I0108 12:37:08.176217 6872 oci.go:103] Successfully created a docker volume ingress-addon-legacy-123658
I0108 12:37:08.176358 6872 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-123658-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123658 --entrypoint /usr/bin/test -v ingress-addon-legacy-123658:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
I0108 12:37:08.618945 6872 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-123658
I0108 12:37:08.618984 6872 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 12:37:08.619000 6872 kic.go:179] Starting extracting preloaded images to volume ...
I0108 12:37:08.619129 6872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-123658:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
I0108 12:37:14.637774 6872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-2761/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-123658:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (6.018612761s)
I0108 12:37:14.637800 6872 kic.go:188] duration metric: took 6.018871 seconds to extract preloaded images to volume
I0108 12:37:14.637938 6872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0108 12:37:14.781648 6872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-123658 --name ingress-addon-legacy-123658 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-123658 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-123658 --network ingress-addon-legacy-123658 --ip 192.168.49.2 --volume ingress-addon-legacy-123658:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
I0108 12:37:15.129060 6872 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Running}}
I0108 12:37:15.190224 6872 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Status}}
I0108 12:37:15.253766 6872 cli_runner.go:164] Run: docker exec ingress-addon-legacy-123658 stat /var/lib/dpkg/alternatives/iptables
I0108 12:37:15.368111 6872 oci.go:144] the created container "ingress-addon-legacy-123658" has a running status.
I0108 12:37:15.368154 6872 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa...
I0108 12:37:15.450111 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0108 12:37:15.450208 6872 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0108 12:37:15.560982 6872 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Status}}
I0108 12:37:15.619501 6872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0108 12:37:15.619520 6872 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-123658 chown docker:docker /home/docker/.ssh/authorized_keys]
I0108 12:37:15.724202 6872 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-123658 --format={{.State.Status}}
I0108 12:37:15.782063 6872 machine.go:88] provisioning docker machine ...
I0108 12:37:15.782107 6872 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-123658"
I0108 12:37:15.782218 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:15.839847 6872 main.go:134] libmachine: Using SSH client type: native
I0108 12:37:15.840049 6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50561 <nil> <nil>}
I0108 12:37:15.840064 6872 main.go:134] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-123658 && echo "ingress-addon-legacy-123658" | sudo tee /etc/hostname
I0108 12:37:15.967595 6872 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-123658
I0108 12:37:15.967704 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:16.025641 6872 main.go:134] libmachine: Using SSH client type: native
I0108 12:37:16.025816 6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50561 <nil> <nil>}
I0108 12:37:16.025832 6872 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-123658' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-123658/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-123658' | sudo tee -a /etc/hosts;
fi
fi
I0108 12:37:16.146397 6872 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0108 12:37:16.146419 6872 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-2761/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-2761/.minikube}
I0108 12:37:16.146438 6872 ubuntu.go:177] setting up certificates
I0108 12:37:16.146446 6872 provision.go:83] configureAuth start
I0108 12:37:16.146541 6872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123658
I0108 12:37:16.204001 6872 provision.go:138] copyHostCerts
I0108 12:37:16.204064 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
I0108 12:37:16.204124 6872 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem, removing ...
I0108 12:37:16.204131 6872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem
I0108 12:37:16.204253 6872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.pem (1082 bytes)
I0108 12:37:16.204428 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
I0108 12:37:16.204473 6872 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem, removing ...
I0108 12:37:16.204478 6872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem
I0108 12:37:16.204549 6872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/cert.pem (1123 bytes)
I0108 12:37:16.204684 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
I0108 12:37:16.204726 6872 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem, removing ...
I0108 12:37:16.204730 6872 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem
I0108 12:37:16.204798 6872 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-2761/.minikube/key.pem (1675 bytes)
I0108 12:37:16.204925 6872 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-123658 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-123658]
I0108 12:37:16.312882 6872 provision.go:172] copyRemoteCerts
I0108 12:37:16.312942 6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 12:37:16.313005 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:16.370128 6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
I0108 12:37:16.455432 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem -> /etc/docker/server.pem
I0108 12:37:16.455534 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0108 12:37:16.472398 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0108 12:37:16.472480 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0108 12:37:16.490255 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0108 12:37:16.490337 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0108 12:37:16.507438 6872 provision.go:86] duration metric: configureAuth took 360.984333ms
I0108 12:37:16.507453 6872 ubuntu.go:193] setting minikube options for container-runtime
I0108 12:37:16.507612 6872 config.go:180] Loaded profile config "ingress-addon-legacy-123658": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0108 12:37:16.507688 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:16.565205 6872 main.go:134] libmachine: Using SSH client type: native
I0108 12:37:16.565364 6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50561 <nil> <nil>}
I0108 12:37:16.565376 6872 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0108 12:37:16.682349 6872 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0108 12:37:16.682367 6872 ubuntu.go:71] root file system type: overlay
I0108 12:37:16.682506 6872 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0108 12:37:16.682599 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:16.740597 6872 main.go:134] libmachine: Using SSH client type: native
I0108 12:37:16.740762 6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50561 <nil> <nil>}
I0108 12:37:16.740813 6872 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0108 12:37:16.868535 6872 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0108 12:37:16.868665 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:16.927877 6872 main.go:134] libmachine: Using SSH client type: native
I0108 12:37:16.928050 6872 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50561 <nil> <nil>}
I0108 12:37:16.928064 6872 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0108 12:37:17.515872 6872 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-25 18:00:04.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-08 20:37:16.866135097 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0108 12:37:17.515910 6872 machine.go:91] provisioned docker machine in 1.73384737s
I0108 12:37:17.515930 6872 client.go:171] LocalClient.Create took 9.709870207s
I0108 12:37:17.515947 6872 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-123658" took 9.709942799s
I0108 12:37:17.515958 6872 start.go:300] post-start starting for "ingress-addon-legacy-123658" (driver="docker")
I0108 12:37:17.515966 6872 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 12:37:17.516093 6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 12:37:17.516214 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:17.575304 6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
I0108 12:37:17.663004 6872 ssh_runner.go:195] Run: cat /etc/os-release
I0108 12:37:17.666526 6872 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0108 12:37:17.666543 6872 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0108 12:37:17.666556 6872 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0108 12:37:17.666562 6872 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0108 12:37:17.666572 6872 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/addons for local assets ...
I0108 12:37:17.666664 6872 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-2761/.minikube/files for local assets ...
I0108 12:37:17.666847 6872 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> 40832.pem in /etc/ssl/certs
I0108 12:37:17.666853 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /etc/ssl/certs/40832.pem
I0108 12:37:17.667078 6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 12:37:17.674461 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /etc/ssl/certs/40832.pem (1708 bytes)
I0108 12:37:17.691641 6872 start.go:303] post-start completed in 175.675724ms
I0108 12:37:17.692226 6872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123658
I0108 12:37:17.750933 6872 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/config.json ...
I0108 12:37:17.751381 6872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0108 12:37:17.751465 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:17.807814 6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
I0108 12:37:17.892935 6872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0108 12:37:17.897352 6872 start.go:128] duration metric: createHost completed in 10.144458228s
I0108 12:37:17.897369 6872 start.go:83] releasing machines lock for "ingress-addon-legacy-123658", held for 10.144642002s
I0108 12:37:17.897470 6872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-123658
I0108 12:37:17.954806 6872 ssh_runner.go:195] Run: cat /version.json
I0108 12:37:17.954834 6872 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0108 12:37:17.954893 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:17.954917 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:18.018781 6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
I0108 12:37:18.018907 6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50561 SSHKeyPath:/Users/jenkins/minikube-integration/15565-2761/.minikube/machines/ingress-addon-legacy-123658/id_rsa Username:docker}
I0108 12:37:18.362152 6872 ssh_runner.go:195] Run: systemctl --version
I0108 12:37:18.367033 6872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0108 12:37:18.376839 6872 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0108 12:37:18.376904 6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 12:37:18.386530 6872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0108 12:37:18.399469 6872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0108 12:37:18.469692 6872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0108 12:37:18.541073 6872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 12:37:18.610109 6872 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 12:37:18.822363 6872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 12:37:18.853711 6872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 12:37:18.930903 6872 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.21 ...
I0108 12:37:18.931149 6872 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-123658 dig +short host.docker.internal
I0108 12:37:19.046619 6872 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0108 12:37:19.046738 6872 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0108 12:37:19.051083 6872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 12:37:19.061084 6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-123658
I0108 12:37:19.120796 6872 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 12:37:19.120900 6872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 12:37:19.144666 6872 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0108 12:37:19.144685 6872 docker.go:543] Images already preloaded, skipping extraction
I0108 12:37:19.144774 6872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 12:37:19.170889 6872 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0108 12:37:19.170920 6872 cache_images.go:84] Images are preloaded, skipping loading
I0108 12:37:19.171022 6872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0108 12:37:19.239917 6872 cni.go:95] Creating CNI manager for ""
I0108 12:37:19.239936 6872 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0108 12:37:19.239964 6872 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0108 12:37:19.239980 6872 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-123658 NodeName:ingress-addon-legacy-123658 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0108 12:37:19.240123 6872 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-123658"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0108 12:37:19.240211 6872 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-123658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0108 12:37:19.240286 6872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0108 12:37:19.248146 6872 binaries.go:44] Found k8s binaries, skipping transfer
I0108 12:37:19.248215 6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0108 12:37:19.255696 6872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0108 12:37:19.268812 6872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0108 12:37:19.281890 6872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
I0108 12:37:19.294893 6872 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0108 12:37:19.298825 6872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 12:37:19.308798 6872 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658 for IP: 192.168.49.2
I0108 12:37:19.308960 6872 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key
I0108 12:37:19.309040 6872 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key
I0108 12:37:19.309090 6872 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.key
I0108 12:37:19.309108 6872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.crt with IP's: []
I0108 12:37:19.445343 6872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.crt ...
I0108 12:37:19.445355 6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.crt: {Name:mk84f7860d5c3b6cc55150059aadf2f55a36fd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 12:37:19.445740 6872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.key ...
I0108 12:37:19.445748 6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/client.key: {Name:mka89f32d4824ab11494b2ccc762c8d45e2a2f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 12:37:19.445964 6872 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key.dd3b5fb2
I0108 12:37:19.445982 6872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0108 12:37:19.519320 6872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt.dd3b5fb2 ...
I0108 12:37:19.519328 6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt.dd3b5fb2: {Name:mka63b112e800d0a58356444d154c62037b034b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 12:37:19.519556 6872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key.dd3b5fb2 ...
I0108 12:37:19.519563 6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key.dd3b5fb2: {Name:mk885f55cf4c1efcb0608b93715a9b7a860b54ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 12:37:19.519748 6872 certs.go:320] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt
I0108 12:37:19.519920 6872 certs.go:324] copying /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key
I0108 12:37:19.520105 6872 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key
I0108 12:37:19.520124 6872 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt with IP's: []
I0108 12:37:19.662437 6872 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt ...
I0108 12:37:19.662446 6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt: {Name:mk2d2c053a1ce2e9a514e94c944bde5fd264199d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 12:37:19.662729 6872 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key ...
I0108 12:37:19.662737 6872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key: {Name:mk683f0f8a703c2f5ba7127ed4fd24655f2d9618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 12:37:19.662924 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0108 12:37:19.662956 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0108 12:37:19.662983 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0108 12:37:19.663006 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0108 12:37:19.663029 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0108 12:37:19.663050 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0108 12:37:19.663069 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0108 12:37:19.663089 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0108 12:37:19.663195 6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem (1338 bytes)
W0108 12:37:19.663245 6872 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083_empty.pem, impossibly tiny 0 bytes
I0108 12:37:19.663256 6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca-key.pem (1675 bytes)
I0108 12:37:19.663337 6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/ca.pem (1082 bytes)
I0108 12:37:19.663374 6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/cert.pem (1123 bytes)
I0108 12:37:19.663408 6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/certs/key.pem (1675 bytes)
I0108 12:37:19.663484 6872 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem (1708 bytes)
I0108 12:37:19.663522 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem -> /usr/share/ca-certificates/40832.pem
I0108 12:37:19.663545 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0108 12:37:19.663566 6872 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem -> /usr/share/ca-certificates/4083.pem
I0108 12:37:19.664069 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0108 12:37:19.683555 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0108 12:37:19.700970 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0108 12:37:19.719044 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/profiles/ingress-addon-legacy-123658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0108 12:37:19.736200 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0108 12:37:19.753513 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0108 12:37:19.770679 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0108 12:37:19.788217 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0108 12:37:19.806214 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/files/etc/ssl/certs/40832.pem --> /usr/share/ca-certificates/40832.pem (1708 bytes)
I0108 12:37:19.823864 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0108 12:37:19.841502 6872 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-2761/.minikube/certs/4083.pem --> /usr/share/ca-certificates/4083.pem (1338 bytes)
I0108 12:37:19.859216 6872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0108 12:37:19.872748 6872 ssh_runner.go:195] Run: openssl version
I0108 12:37:19.878388 6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4083.pem && ln -fs /usr/share/ca-certificates/4083.pem /etc/ssl/certs/4083.pem"
I0108 12:37:19.886764 6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4083.pem
I0108 12:37:19.891015 6872 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 8 20:32 /usr/share/ca-certificates/4083.pem
I0108 12:37:19.891075 6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4083.pem
I0108 12:37:19.896648 6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4083.pem /etc/ssl/certs/51391683.0"
I0108 12:37:19.904794 6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40832.pem && ln -fs /usr/share/ca-certificates/40832.pem /etc/ssl/certs/40832.pem"
I0108 12:37:19.913305 6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40832.pem
I0108 12:37:19.917611 6872 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 8 20:32 /usr/share/ca-certificates/40832.pem
I0108 12:37:19.917666 6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40832.pem
I0108 12:37:19.923187 6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40832.pem /etc/ssl/certs/3ec20f2e.0"
I0108 12:37:19.931457 6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0108 12:37:19.939545 6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0108 12:37:19.943814 6872 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 8 20:27 /usr/share/ca-certificates/minikubeCA.pem
I0108 12:37:19.943910 6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0108 12:37:19.949473 6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0108 12:37:19.957589 6872 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-123658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-123658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 12:37:19.957793 6872 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0108 12:37:19.980719 6872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0108 12:37:19.988846 6872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 12:37:19.996341 6872 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0108 12:37:19.996435 6872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 12:37:20.004005 6872 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 12:37:20.004033 6872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 12:37:20.053056 6872 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I0108 12:37:20.053093 6872 kubeadm.go:317] [preflight] Running pre-flight checks
I0108 12:37:20.357188 6872 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0108 12:37:20.357277 6872 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0108 12:37:20.357399 6872 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0108 12:37:20.579992 6872 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0108 12:37:20.580927 6872 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0108 12:37:20.580974 6872 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0108 12:37:20.649044 6872 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0108 12:37:20.671978 6872 out.go:204] - Generating certificates and keys ...
I0108 12:37:20.672060 6872 kubeadm.go:317] [certs] Using existing ca certificate authority
I0108 12:37:20.672123 6872 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0108 12:37:20.731453 6872 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I0108 12:37:20.808804 6872 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I0108 12:37:20.847089 6872 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I0108 12:37:21.227315 6872 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I0108 12:37:21.416597 6872 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I0108 12:37:21.416773 6872 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0108 12:37:21.468123 6872 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I0108 12:37:21.468228 6872 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0108 12:37:21.650473 6872 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I0108 12:37:21.701479 6872 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I0108 12:37:21.815610 6872 kubeadm.go:317] [certs] Generating "sa" key and public key
I0108 12:37:21.815700 6872 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0108 12:37:21.999951 6872 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0108 12:37:22.071347 6872 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0108 12:37:22.117671 6872 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0108 12:37:22.223521 6872 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0108 12:37:22.224301 6872 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0108 12:37:22.245954 6872 out.go:204] - Booting up control plane ...
I0108 12:37:22.246043 6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0108 12:37:22.246117 6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0108 12:37:22.246180 6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0108 12:37:22.246268 6872 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0108 12:37:22.246397 6872 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0108 12:38:02.234106 6872 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I0108 12:38:02.234853 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:38:02.235044 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:38:07.236415 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:38:07.236641 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:38:17.237099 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:38:17.237282 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:38:37.238652 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:38:37.238858 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:39:17.238826 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:39:17.238989 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:39:17.239007 6872 kubeadm.go:317]
I0108 12:39:17.239052 6872 kubeadm.go:317] Unfortunately, an error has occurred:
I0108 12:39:17.239102 6872 kubeadm.go:317] timed out waiting for the condition
I0108 12:39:17.239114 6872 kubeadm.go:317]
I0108 12:39:17.239150 6872 kubeadm.go:317] This error is likely caused by:
I0108 12:39:17.239182 6872 kubeadm.go:317] - The kubelet is not running
I0108 12:39:17.239289 6872 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0108 12:39:17.239305 6872 kubeadm.go:317]
I0108 12:39:17.239431 6872 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0108 12:39:17.239492 6872 kubeadm.go:317] - 'systemctl status kubelet'
I0108 12:39:17.239531 6872 kubeadm.go:317] - 'journalctl -xeu kubelet'
I0108 12:39:17.239546 6872 kubeadm.go:317]
I0108 12:39:17.239659 6872 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0108 12:39:17.239732 6872 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0108 12:39:17.239738 6872 kubeadm.go:317]
I0108 12:39:17.239827 6872 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I0108 12:39:17.239863 6872 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I0108 12:39:17.239920 6872 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I0108 12:39:17.239976 6872 kubeadm.go:317] - 'docker logs CONTAINERID'
I0108 12:39:17.239995 6872 kubeadm.go:317]
I0108 12:39:17.242130 6872 kubeadm.go:317] W0108 20:37:20.052101 958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0108 12:39:17.242192 6872 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0108 12:39:17.242290 6872 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
I0108 12:39:17.242377 6872 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 12:39:17.242499 6872 kubeadm.go:317] W0108 20:37:22.229132 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 12:39:17.242609 6872 kubeadm.go:317] W0108 20:37:22.230118 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 12:39:17.242669 6872 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0108 12:39:17.242737 6872 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W0108 12:39:17.242976 6872 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-123658 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0108 20:37:20.052101 958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0108 20:37:22.229132 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0108 20:37:22.230118 958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0108 12:39:17.243014 6872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0108 12:39:17.658029 6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 12:39:17.667913 6872 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0108 12:39:17.667980 6872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 12:39:17.675447 6872 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 12:39:17.675474 6872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 12:39:17.722107 6872 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I0108 12:39:17.722167 6872 kubeadm.go:317] [preflight] Running pre-flight checks
I0108 12:39:18.009810 6872 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0108 12:39:18.009898 6872 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0108 12:39:18.009967 6872 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0108 12:39:18.228579 6872 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0108 12:39:18.242865 6872 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0108 12:39:18.242899 6872 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0108 12:39:18.298448 6872 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0108 12:39:18.320227 6872 out.go:204] - Generating certificates and keys ...
I0108 12:39:18.320331 6872 kubeadm.go:317] [certs] Using existing ca certificate authority
I0108 12:39:18.320403 6872 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0108 12:39:18.320481 6872 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0108 12:39:18.320576 6872 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
I0108 12:39:18.320731 6872 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
I0108 12:39:18.320799 6872 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
I0108 12:39:18.320889 6872 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
I0108 12:39:18.320949 6872 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
I0108 12:39:18.321041 6872 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0108 12:39:18.321140 6872 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0108 12:39:18.321209 6872 kubeadm.go:317] [certs] Using the existing "sa" key
I0108 12:39:18.321271 6872 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0108 12:39:18.466596 6872 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0108 12:39:18.669614 6872 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0108 12:39:18.882171 6872 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0108 12:39:18.967242 6872 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0108 12:39:18.968004 6872 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0108 12:39:18.991842 6872 out.go:204] - Booting up control plane ...
I0108 12:39:18.992081 6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0108 12:39:18.992245 6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0108 12:39:18.992384 6872 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0108 12:39:18.992571 6872 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0108 12:39:18.992843 6872 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0108 12:39:58.976690 6872 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I0108 12:39:58.977491 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:39:58.977714 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:40:03.978945 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:40:03.979148 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:40:13.980539 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:40:13.980748 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:40:33.982885 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:40:33.983103 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:41:13.983801 6872 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 12:41:13.984085 6872 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 12:41:13.984107 6872 kubeadm.go:317]
I0108 12:41:13.984182 6872 kubeadm.go:317] Unfortunately, an error has occurred:
I0108 12:41:13.984231 6872 kubeadm.go:317] timed out waiting for the condition
I0108 12:41:13.984241 6872 kubeadm.go:317]
I0108 12:41:13.984276 6872 kubeadm.go:317] This error is likely caused by:
I0108 12:41:13.984321 6872 kubeadm.go:317] - The kubelet is not running
I0108 12:41:13.984430 6872 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0108 12:41:13.984440 6872 kubeadm.go:317]
I0108 12:41:13.984540 6872 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0108 12:41:13.984607 6872 kubeadm.go:317] - 'systemctl status kubelet'
I0108 12:41:13.984657 6872 kubeadm.go:317] - 'journalctl -xeu kubelet'
I0108 12:41:13.984668 6872 kubeadm.go:317]
I0108 12:41:13.984805 6872 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0108 12:41:13.984909 6872 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0108 12:41:13.984923 6872 kubeadm.go:317]
I0108 12:41:13.985039 6872 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I0108 12:41:13.985115 6872 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I0108 12:41:13.985212 6872 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I0108 12:41:13.985250 6872 kubeadm.go:317] - 'docker logs CONTAINERID'
I0108 12:41:13.985260 6872 kubeadm.go:317]
I0108 12:41:13.988378 6872 kubeadm.go:317] W0108 20:39:17.721599 3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0108 12:41:13.988452 6872 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0108 12:41:13.988566 6872 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
I0108 12:41:13.988650 6872 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 12:41:13.988740 6872 kubeadm.go:317] W0108 20:39:18.972373 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 12:41:13.988827 6872 kubeadm.go:317] W0108 20:39:18.973116 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 12:41:13.988912 6872 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0108 12:41:13.988978 6872 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I0108 12:41:13.988995 6872 kubeadm.go:398] StartCluster complete in 3m54.034168221s
I0108 12:41:13.989095 6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0108 12:41:14.011956 6872 logs.go:274] 0 containers: []
W0108 12:41:14.011970 6872 logs.go:276] No container was found matching "kube-apiserver"
I0108 12:41:14.012053 6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0108 12:41:14.035075 6872 logs.go:274] 0 containers: []
W0108 12:41:14.035089 6872 logs.go:276] No container was found matching "etcd"
I0108 12:41:14.035184 6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0108 12:41:14.057359 6872 logs.go:274] 0 containers: []
W0108 12:41:14.057372 6872 logs.go:276] No container was found matching "coredns"
I0108 12:41:14.057453 6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0108 12:41:14.080972 6872 logs.go:274] 0 containers: []
W0108 12:41:14.080987 6872 logs.go:276] No container was found matching "kube-scheduler"
I0108 12:41:14.081081 6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0108 12:41:14.103989 6872 logs.go:274] 0 containers: []
W0108 12:41:14.104002 6872 logs.go:276] No container was found matching "kube-proxy"
I0108 12:41:14.104093 6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0108 12:41:14.127572 6872 logs.go:274] 0 containers: []
W0108 12:41:14.127586 6872 logs.go:276] No container was found matching "kubernetes-dashboard"
I0108 12:41:14.127675 6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0108 12:41:14.150493 6872 logs.go:274] 0 containers: []
W0108 12:41:14.150508 6872 logs.go:276] No container was found matching "storage-provisioner"
I0108 12:41:14.150591 6872 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0108 12:41:14.172731 6872 logs.go:274] 0 containers: []
W0108 12:41:14.172745 6872 logs.go:276] No container was found matching "kube-controller-manager"
I0108 12:41:14.172753 6872 logs.go:123] Gathering logs for container status ...
I0108 12:41:14.172760 6872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0108 12:41:16.225088 6872 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052340542s)
I0108 12:41:16.225244 6872 logs.go:123] Gathering logs for kubelet ...
I0108 12:41:16.225253 6872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0108 12:41:16.264127 6872 logs.go:123] Gathering logs for dmesg ...
I0108 12:41:16.264141 6872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0108 12:41:16.277154 6872 logs.go:123] Gathering logs for describe nodes ...
I0108 12:41:16.277168 6872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0108 12:41:16.329730 6872 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0108 12:41:16.329742 6872 logs.go:123] Gathering logs for Docker ...
I0108 12:41:16.329751 6872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
W0108 12:41:16.345256 6872 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0108 20:39:17.721599 3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0108 20:39:18.972373 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0108 20:39:18.973116 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 12:41:16.345280 6872 out.go:239] *
W0108 12:41:16.345405 6872 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0108 20:39:17.721599 3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0108 20:39:18.972373 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0108 20:39:18.973116 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0108 20:39:17.721599 3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0108 20:39:18.972373 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0108 20:39:18.973116 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 12:41:16.345426 6872 out.go:239] *
W0108 12:41:16.346035 6872 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0108 12:41:16.410910 6872 out.go:177]
W0108 12:41:16.455008 6872 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0108 20:39:17.721599 3452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0108 20:39:18.972373 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0108 20:39:18.973116 3452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 12:41:16.455238 6872 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0108 12:41:16.455353 6872 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0108 12:41:16.512484 6872 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-123658 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (258.44s)
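Editor's note: the failure above is the kubelet never answering on 127.0.0.1:10248, so kubeadm times out in its wait-control-plane phase and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal manual retry of the suggestion printed in the log might look like the sketch below. It reuses the same local binary, profile name, and start flags from the failing invocation and only adds the suggested --extra-config=kubelet.cgroup-driver=systemd; this is an untested illustration of the log's own advice, not a confirmed fix.

    # remove the half-initialized profile, then retry with the suggested kubelet cgroup driver
    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-123658
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-123658 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd

    # if it still fails, inspect the kubelet unit inside the node and collect logs for an issue report
    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-123658 -- sudo journalctl -xeu kubelet
    out/minikube-darwin-amd64 logs --file=logs.txt -p ingress-addon-legacy-123658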