=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-021549 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0114 02:16:43.047532 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:18:59.197021 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:19:19.880571 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:19.886306 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:19.898305 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:19.920482 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:19.961638 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:20.043006 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:20.203444 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:20.525609 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:21.167284 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:22.447830 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:25.009127 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:26.891142 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:19:30.131489 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:40.372235 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:20:00.854790 2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
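The E-lines above come from client-go's certificate-reload watcher (cert_rotation.go:168): the shared kubeconfig still references client certificates of profiles that earlier tests in this run (addons-020619, functional-021137) have since deleted, so they are noise rather than the cause of this failure. A minimal sketch for spotting such dangling references, assuming the KUBECONFIG path shown in the environment lines below:
# Sketch only: print client.crt paths referenced by the kubeconfig that no
# longer exist on disk (KUBECONFIG path copied from this log).
KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
grep -o '/Users/[^[:space:]]*client\.crt' "$KUBECONFIG" | sort -u |
while read -r crt; do [ -e "$crt" ] || echo "dangling: $crt"; done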
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-021549 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m28.292269516s)
-- stdout --
* [ingress-addon-legacy-021549] minikube v1.28.0 on Darwin 13.0.1
- MINIKUBE_LOCATION=15642
- KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-021549 in cluster ingress-addon-legacy-021549
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 20.10.21 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0114 02:15:49.568610 5508 out.go:296] Setting OutFile to fd 1 ...
I0114 02:15:49.568807 5508 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 02:15:49.568813 5508 out.go:309] Setting ErrFile to fd 2...
I0114 02:15:49.568817 5508 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 02:15:49.568924 5508 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
I0114 02:15:49.569452 5508 out.go:303] Setting JSON to false
I0114 02:15:49.588247 5508 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":923,"bootTime":1673690426,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W0114 02:15:49.588329 5508 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0114 02:15:49.609910 5508 out.go:177] * [ingress-addon-legacy-021549] minikube v1.28.0 on Darwin 13.0.1
I0114 02:15:49.653885 5508 notify.go:220] Checking for updates...
I0114 02:15:49.675684 5508 out.go:177] - MINIKUBE_LOCATION=15642
I0114 02:15:49.696683 5508 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
I0114 02:15:49.718956 5508 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0114 02:15:49.741960 5508 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 02:15:49.763647 5508 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
I0114 02:15:49.784884 5508 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 02:15:49.846989 5508 docker.go:138] docker version: linux-20.10.21
I0114 02:15:49.847121 5508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0114 02:15:49.986360 5508 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-14 10:15:49.896274223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I0114 02:15:50.062088 5508 out.go:177] * Using the docker driver based on user configuration
I0114 02:15:50.083980 5508 start.go:294] selected driver: docker
I0114 02:15:50.084007 5508 start.go:838] validating driver "docker" against <nil>
I0114 02:15:50.084031 5508 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 02:15:50.087838 5508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0114 02:15:50.226451 5508 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-14 10:15:50.137455719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I0114 02:15:50.226550 5508 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0114 02:15:50.226690 5508 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0114 02:15:50.248536 5508 out.go:177] * Using Docker Desktop driver with root privileges
I0114 02:15:50.270022 5508 cni.go:95] Creating CNI manager for ""
I0114 02:15:50.270055 5508 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 02:15:50.270075 5508 start_flags.go:319] config:
{Name:ingress-addon-legacy-021549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-021549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 02:15:50.292428 5508 out.go:177] * Starting control plane node ingress-addon-legacy-021549 in cluster ingress-addon-legacy-021549
I0114 02:15:50.334283 5508 cache.go:120] Beginning downloading kic base image for docker with docker
I0114 02:15:50.356123 5508 out.go:177] * Pulling base image ...
I0114 02:15:50.398236 5508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0114 02:15:50.398291 5508 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
I0114 02:15:50.455865 5508 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
I0114 02:15:50.455890 5508 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
I0114 02:15:50.497540 5508 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0114 02:15:50.497562 5508 cache.go:57] Caching tarball of preloaded images
I0114 02:15:50.497944 5508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0114 02:15:50.542054 5508 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0114 02:15:50.563116 5508 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0114 02:15:50.798974 5508 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0114 02:16:07.892557 5508 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0114 02:16:07.892751 5508 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0114 02:16:08.509481 5508 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0114 02:16:08.509760 5508 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/config.json ...
I0114 02:16:08.509794 5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/config.json: {Name:mk43621aa12416a727dfcfd39a1b8a9c87a82a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 02:16:08.510104 5508 cache.go:193] Successfully downloaded all kic artifacts
I0114 02:16:08.510129 5508 start.go:364] acquiring machines lock for ingress-addon-legacy-021549: {Name:mk059708f1ee422c2c43c60a6ec8d2062f575157 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 02:16:08.510303 5508 start.go:368] acquired machines lock for "ingress-addon-legacy-021549" in 163.922µs
I0114 02:16:08.510329 5508 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-021549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-021549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 02:16:08.510401 5508 start.go:125] createHost starting for "" (driver="docker")
I0114 02:16:08.564442 5508 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0114 02:16:08.564773 5508 start.go:159] libmachine.API.Create for "ingress-addon-legacy-021549" (driver="docker")
I0114 02:16:08.564826 5508 client.go:168] LocalClient.Create starting
I0114 02:16:08.565030 5508 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem
I0114 02:16:08.565115 5508 main.go:134] libmachine: Decoding PEM data...
I0114 02:16:08.565146 5508 main.go:134] libmachine: Parsing certificate...
I0114 02:16:08.565244 5508 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem
I0114 02:16:08.565313 5508 main.go:134] libmachine: Decoding PEM data...
I0114 02:16:08.565330 5508 main.go:134] libmachine: Parsing certificate...
I0114 02:16:08.566278 5508 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-021549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0114 02:16:08.624768 5508 cli_runner.go:211] docker network inspect ingress-addon-legacy-021549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0114 02:16:08.624878 5508 network_create.go:280] running [docker network inspect ingress-addon-legacy-021549] to gather additional debugging logs...
I0114 02:16:08.624901 5508 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-021549
W0114 02:16:08.678546 5508 cli_runner.go:211] docker network inspect ingress-addon-legacy-021549 returned with exit code 1
I0114 02:16:08.678575 5508 network_create.go:283] error running [docker network inspect ingress-addon-legacy-021549]: docker network inspect ingress-addon-legacy-021549: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-021549
I0114 02:16:08.678598 5508 network_create.go:285] output of [docker network inspect ingress-addon-legacy-021549]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-021549
** /stderr **
I0114 02:16:08.678713 5508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0114 02:16:08.733500 5508 network.go:277] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b8f418] misses:0}
I0114 02:16:08.733540 5508 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0114 02:16:08.733556 5508 network_create.go:123] attempt to create docker network ingress-addon-legacy-021549 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0114 02:16:08.733650 5508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-021549 ingress-addon-legacy-021549
I0114 02:16:08.829530 5508 network_create.go:107] docker network ingress-addon-legacy-021549 192.168.49.0/24 created
I0114 02:16:08.829567 5508 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-021549" container
I0114 02:16:08.829696 5508 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0114 02:16:08.883708 5508 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-021549 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-021549 --label created_by.minikube.sigs.k8s.io=true
I0114 02:16:08.938418 5508 oci.go:103] Successfully created a docker volume ingress-addon-legacy-021549
I0114 02:16:08.938546 5508 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-021549-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-021549 --entrypoint /usr/bin/test -v ingress-addon-legacy-021549:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
I0114 02:16:09.349830 5508 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-021549
I0114 02:16:09.349899 5508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0114 02:16:09.349918 5508 kic.go:190] Starting extracting preloaded images to volume ...
I0114 02:16:09.350053 5508 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-021549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
I0114 02:16:15.602885 5508 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-021549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (6.252627335s)
I0114 02:16:15.602905 5508 kic.go:199] duration metric: took 6.252899 seconds to extract preloaded images to volume
I0114 02:16:15.603033 5508 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0114 02:16:15.770136 5508 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-021549 --name ingress-addon-legacy-021549 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-021549 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-021549 --network ingress-addon-legacy-021549 --ip 192.168.49.2 --volume ingress-addon-legacy-021549:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
I0114 02:16:16.116719 5508 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Running}}
I0114 02:16:16.175634 5508 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Status}}
I0114 02:16:16.237436 5508 cli_runner.go:164] Run: docker exec ingress-addon-legacy-021549 stat /var/lib/dpkg/alternatives/iptables
I0114 02:16:16.357371 5508 oci.go:144] the created container "ingress-addon-legacy-021549" has a running status.
I0114 02:16:16.357400 5508 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa...
I0114 02:16:16.429911 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0114 02:16:16.430002 5508 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0114 02:16:16.538539 5508 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Status}}
I0114 02:16:16.596624 5508 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0114 02:16:16.596644 5508 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-021549 chown docker:docker /home/docker/.ssh/authorized_keys]
I0114 02:16:16.702136 5508 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Status}}
I0114 02:16:16.759921 5508 machine.go:88] provisioning docker machine ...
I0114 02:16:16.759973 5508 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-021549"
I0114 02:16:16.760088 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:16.816116 5508 main.go:134] libmachine: Using SSH client type: native
I0114 02:16:16.816309 5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50531 <nil> <nil>}
I0114 02:16:16.816326 5508 main.go:134] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-021549 && echo "ingress-addon-legacy-021549" | sudo tee /etc/hostname
I0114 02:16:16.943306 5508 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-021549
I0114 02:16:16.943399 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:17.000510 5508 main.go:134] libmachine: Using SSH client type: native
I0114 02:16:17.000670 5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50531 <nil> <nil>}
I0114 02:16:17.000688 5508 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-021549' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-021549/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-021549' | sudo tee -a /etc/hosts;
fi
fi
I0114 02:16:17.118880 5508 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0114 02:16:17.118907 5508 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
I0114 02:16:17.118932 5508 ubuntu.go:177] setting up certificates
I0114 02:16:17.118940 5508 provision.go:83] configureAuth start
I0114 02:16:17.119034 5508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-021549
I0114 02:16:17.175300 5508 provision.go:138] copyHostCerts
I0114 02:16:17.175347 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
I0114 02:16:17.175426 5508 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
I0114 02:16:17.175433 5508 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
I0114 02:16:17.175543 5508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
I0114 02:16:17.175710 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
I0114 02:16:17.175753 5508 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
I0114 02:16:17.175757 5508 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
I0114 02:16:17.175825 5508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
I0114 02:16:17.175956 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
I0114 02:16:17.175991 5508 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
I0114 02:16:17.175996 5508 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
I0114 02:16:17.176062 5508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
I0114 02:16:17.176192 5508 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-021549 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-021549]
I0114 02:16:17.260975 5508 provision.go:172] copyRemoteCerts
I0114 02:16:17.261032 5508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0114 02:16:17.261098 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:17.318655 5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
I0114 02:16:17.405443 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem -> /etc/docker/server.pem
I0114 02:16:17.405541 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0114 02:16:17.422345 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0114 02:16:17.422439 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0114 02:16:17.438909 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0114 02:16:17.439001 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0114 02:16:17.456102 5508 provision.go:86] duration metric: configureAuth took 337.145539ms
I0114 02:16:17.456115 5508 ubuntu.go:193] setting minikube options for container-runtime
I0114 02:16:17.456280 5508 config.go:180] Loaded profile config "ingress-addon-legacy-021549": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0114 02:16:17.456350 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:17.513824 5508 main.go:134] libmachine: Using SSH client type: native
I0114 02:16:17.513988 5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50531 <nil> <nil>}
I0114 02:16:17.514005 5508 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0114 02:16:17.632737 5508 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I0114 02:16:17.632760 5508 ubuntu.go:71] root file system type: overlay
I0114 02:16:17.632933 5508 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0114 02:16:17.633032 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:17.689557 5508 main.go:134] libmachine: Using SSH client type: native
I0114 02:16:17.689731 5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50531 <nil> <nil>}
I0114 02:16:17.689779 5508 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0114 02:16:17.816223 5508 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0114 02:16:17.816337 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:17.872992 5508 main.go:134] libmachine: Using SSH client type: native
I0114 02:16:17.873145 5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 127.0.0.1 50531 <nil> <nil>}
I0114 02:16:17.873158 5508 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0114 02:16:18.465964 5508 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-25 18:00:04.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-14 10:16:17.814414255 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0114 02:16:18.465986 5508 machine.go:91] provisioned docker machine in 1.706010308s
I0114 02:16:18.465992 5508 client.go:171] LocalClient.Create took 9.901011827s
I0114 02:16:18.466009 5508 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-021549" took 9.901091208s
I0114 02:16:18.466019 5508 start.go:300] post-start starting for "ingress-addon-legacy-021549" (driver="docker")
I0114 02:16:18.466025 5508 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0114 02:16:18.466100 5508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0114 02:16:18.466163 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:18.523743 5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
I0114 02:16:18.610660 5508 ssh_runner.go:195] Run: cat /etc/os-release
I0114 02:16:18.614261 5508 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0114 02:16:18.614280 5508 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0114 02:16:18.614289 5508 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0114 02:16:18.614295 5508 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0114 02:16:18.614305 5508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
I0114 02:16:18.614412 5508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
I0114 02:16:18.614597 5508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
I0114 02:16:18.614604 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /etc/ssl/certs/27282.pem
I0114 02:16:18.614810 5508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0114 02:16:18.622075 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
I0114 02:16:18.639333 5508 start.go:303] post-start completed in 173.302438ms
I0114 02:16:18.639888 5508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-021549
I0114 02:16:18.696828 5508 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/config.json ...
I0114 02:16:18.697269 5508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0114 02:16:18.697333 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:18.754033 5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
I0114 02:16:18.838178 5508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0114 02:16:18.842737 5508 start.go:128] duration metric: createHost completed in 10.332173272s
I0114 02:16:18.842754 5508 start.go:83] releasing machines lock for "ingress-addon-legacy-021549", held for 10.332287068s
I0114 02:16:18.842856 5508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-021549
I0114 02:16:18.899155 5508 ssh_runner.go:195] Run: cat /version.json
I0114 02:16:18.899182 5508 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0114 02:16:18.899236 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:18.899272 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:18.959908 5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
I0114 02:16:18.959926 5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
I0114 02:16:19.043049 5508 ssh_runner.go:195] Run: systemctl --version
I0114 02:16:19.316319 5508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0114 02:16:19.326456 5508 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0114 02:16:19.326522 5508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0114 02:16:19.335843 5508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0114 02:16:19.348894 5508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0114 02:16:19.420605 5508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0114 02:16:19.488960 5508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0114 02:16:19.552634 5508 ssh_runner.go:195] Run: sudo systemctl restart docker
I0114 02:16:19.750139 5508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0114 02:16:19.778791 5508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0114 02:16:19.829766 5508 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.21 ...
I0114 02:16:19.829989 5508 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-021549 dig +short host.docker.internal
I0114 02:16:19.934513 5508 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0114 02:16:19.934640 5508 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0114 02:16:19.939154 5508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0114 02:16:19.949032 5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
I0114 02:16:20.005549 5508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0114 02:16:20.005639 5508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0114 02:16:20.029705 5508 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0114 02:16:20.029721 5508 docker.go:543] Images already preloaded, skipping extraction
I0114 02:16:20.029829 5508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0114 02:16:20.053675 5508 docker.go:613] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0114 02:16:20.053701 5508 cache_images.go:84] Images are preloaded, skipping loading
I0114 02:16:20.053799 5508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0114 02:16:20.123036 5508 cni.go:95] Creating CNI manager for ""
I0114 02:16:20.123051 5508 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 02:16:20.123067 5508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0114 02:16:20.123083 5508 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-021549 NodeName:ingress-addon-legacy-021549 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0114 02:16:20.123212 5508 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-021549"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0114 02:16:20.123298 5508 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-021549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-021549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0114 02:16:20.123371 5508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0114 02:16:20.131349 5508 binaries.go:44] Found k8s binaries, skipping transfer
I0114 02:16:20.131424 5508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0114 02:16:20.138933 5508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0114 02:16:20.151755 5508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0114 02:16:20.164673 5508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
I0114 02:16:20.177574 5508 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0114 02:16:20.181360 5508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0114 02:16:20.191011 5508 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549 for IP: 192.168.49.2
I0114 02:16:20.191147 5508 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
I0114 02:16:20.191218 5508 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
I0114 02:16:20.191268 5508 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.key
I0114 02:16:20.191295 5508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.crt with IP's: []
I0114 02:16:20.333668 5508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.crt ...
I0114 02:16:20.333681 5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.crt: {Name:mk84abce9af1b89be3a255209fde1b99bb8c0a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 02:16:20.333994 5508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.key ...
I0114 02:16:20.334002 5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.key: {Name:mkd635305ba708619c27a171239eb62e5058521a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 02:16:20.334210 5508 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key.dd3b5fb2
I0114 02:16:20.334253 5508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0114 02:16:20.393935 5508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt.dd3b5fb2 ...
I0114 02:16:20.393944 5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt.dd3b5fb2: {Name:mk2a745774da1cc1a8385e74128b2bf2cb76adb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 02:16:20.394164 5508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key.dd3b5fb2 ...
I0114 02:16:20.394172 5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key.dd3b5fb2: {Name:mke955028d483a9e517264459bc3dfa9777cd029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 02:16:20.394367 5508 certs.go:320] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt
I0114 02:16:20.394540 5508 certs.go:324] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key
I0114 02:16:20.394712 5508 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key
I0114 02:16:20.394733 5508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt with IP's: []
I0114 02:16:20.597156 5508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt ...
I0114 02:16:20.597165 5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt: {Name:mk5790178e92ab6a43073067029e3e1ecad8a3eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 02:16:20.597431 5508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key ...
I0114 02:16:20.597439 5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key: {Name:mk6ec9aa2687075972c72cdd91b14bf36e1ceaa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 02:16:20.597769 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0114 02:16:20.597803 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0114 02:16:20.597827 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0114 02:16:20.597859 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0114 02:16:20.597907 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0114 02:16:20.597952 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0114 02:16:20.598013 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0114 02:16:20.598034 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0114 02:16:20.598188 5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
W0114 02:16:20.598276 5508 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
I0114 02:16:20.598324 5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
I0114 02:16:20.598381 5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
I0114 02:16:20.598444 5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
I0114 02:16:20.598485 5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
I0114 02:16:20.598555 5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
I0114 02:16:20.598623 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /usr/share/ca-certificates/27282.pem
I0114 02:16:20.598679 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0114 02:16:20.598699 5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem -> /usr/share/ca-certificates/2728.pem
I0114 02:16:20.599190 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0114 02:16:20.617477 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0114 02:16:20.634274 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0114 02:16:20.651033 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0114 02:16:20.668237 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0114 02:16:20.685131 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0114 02:16:20.701794 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0114 02:16:20.718818 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0114 02:16:20.735896 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
I0114 02:16:20.753024 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0114 02:16:20.770004 5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
I0114 02:16:20.787044 5508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0114 02:16:20.799538 5508 ssh_runner.go:195] Run: openssl version
I0114 02:16:20.805058 5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
I0114 02:16:20.813041 5508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
I0114 02:16:20.816951 5508 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
I0114 02:16:20.817022 5508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
I0114 02:16:20.822470 5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
I0114 02:16:20.830432 5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
I0114 02:16:20.838493 5508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
I0114 02:16:20.842295 5508 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
I0114 02:16:20.842350 5508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
I0114 02:16:20.847656 5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
I0114 02:16:20.855635 5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0114 02:16:20.863498 5508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0114 02:16:20.867443 5508 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
I0114 02:16:20.867487 5508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0114 02:16:20.872876 5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0114 02:16:20.880982 5508 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-021549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-021549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 02:16:20.881112 5508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0114 02:16:20.904091 5508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0114 02:16:20.911776 5508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0114 02:16:20.918932 5508 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0114 02:16:20.919001 5508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0114 02:16:20.926410 5508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0114 02:16:20.926432 5508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0114 02:16:20.974583 5508 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I0114 02:16:20.974639 5508 kubeadm.go:317] [preflight] Running pre-flight checks
I0114 02:16:21.265850 5508 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0114 02:16:21.265971 5508 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0114 02:16:21.266051 5508 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0114 02:16:21.485730 5508 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0114 02:16:21.486271 5508 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0114 02:16:21.486317 5508 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0114 02:16:21.558805 5508 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0114 02:16:21.601974 5508 out.go:204] - Generating certificates and keys ...
I0114 02:16:21.602053 5508 kubeadm.go:317] [certs] Using existing ca certificate authority
I0114 02:16:21.602144 5508 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0114 02:16:21.604735 5508 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I0114 02:16:21.801044 5508 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I0114 02:16:22.011731 5508 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I0114 02:16:22.204268 5508 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I0114 02:16:22.314876 5508 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I0114 02:16:22.315057 5508 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0114 02:16:22.474058 5508 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I0114 02:16:22.474185 5508 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0114 02:16:22.541108 5508 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I0114 02:16:22.734579 5508 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I0114 02:16:22.864708 5508 kubeadm.go:317] [certs] Generating "sa" key and public key
I0114 02:16:22.864777 5508 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0114 02:16:23.090639 5508 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0114 02:16:23.169573 5508 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0114 02:16:23.371720 5508 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0114 02:16:23.566389 5508 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0114 02:16:23.566783 5508 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0114 02:16:23.610331 5508 out.go:204] - Booting up control plane ...
I0114 02:16:23.610572 5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0114 02:16:23.610723 5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0114 02:16:23.610867 5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0114 02:16:23.611005 5508 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0114 02:16:23.611258 5508 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0114 02:17:03.576892 5508 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I0114 02:17:03.578031 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:17:03.578234 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:17:08.580183 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:17:08.580396 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:17:18.582121 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:17:18.582340 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:17:38.584131 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:17:38.584353 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:18:18.586273 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:18:18.586493 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:18:18.586513 5508 kubeadm.go:317]
I0114 02:18:18.586563 5508 kubeadm.go:317] Unfortunately, an error has occurred:
I0114 02:18:18.586611 5508 kubeadm.go:317] timed out waiting for the condition
I0114 02:18:18.586622 5508 kubeadm.go:317]
I0114 02:18:18.586684 5508 kubeadm.go:317] This error is likely caused by:
I0114 02:18:18.586744 5508 kubeadm.go:317] - The kubelet is not running
I0114 02:18:18.586864 5508 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0114 02:18:18.586879 5508 kubeadm.go:317]
I0114 02:18:18.586975 5508 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0114 02:18:18.587006 5508 kubeadm.go:317] - 'systemctl status kubelet'
I0114 02:18:18.587037 5508 kubeadm.go:317] - 'journalctl -xeu kubelet'
I0114 02:18:18.587042 5508 kubeadm.go:317]
I0114 02:18:18.587147 5508 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0114 02:18:18.587258 5508 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0114 02:18:18.587278 5508 kubeadm.go:317]
I0114 02:18:18.587368 5508 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I0114 02:18:18.587417 5508 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I0114 02:18:18.587512 5508 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I0114 02:18:18.587546 5508 kubeadm.go:317] - 'docker logs CONTAINERID'
I0114 02:18:18.587552 5508 kubeadm.go:317]
I0114 02:18:18.589747 5508 kubeadm.go:317] W0114 10:16:20.973569 955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0114 02:18:18.589810 5508 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0114 02:18:18.589901 5508 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
I0114 02:18:18.589997 5508 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0114 02:18:18.590106 5508 kubeadm.go:317] W0114 10:16:23.571276 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0114 02:18:18.590205 5508 kubeadm.go:317] W0114 10:16:23.572040 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0114 02:18:18.590263 5508 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0114 02:18:18.590319 5508 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
W0114 02:18:18.590511 5508 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0114 10:16:20.973569 955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0114 10:16:23.571276 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0114 10:16:23.572040 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0114 10:16:20.973569 955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0114 10:16:23.571276 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0114 10:16:23.572040 955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0114 02:18:18.590543 5508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0114 02:18:19.004655 5508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 02:18:19.014498 5508 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0114 02:18:19.014563 5508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0114 02:18:19.021863 5508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0114 02:18:19.021890 5508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0114 02:18:19.070041 5508 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
I0114 02:18:19.070085 5508 kubeadm.go:317] [preflight] Running pre-flight checks
I0114 02:18:19.363161 5508 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0114 02:18:19.363246 5508 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0114 02:18:19.363324 5508 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0114 02:18:19.585374 5508 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0114 02:18:19.593368 5508 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0114 02:18:19.593403 5508 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0114 02:18:19.657620 5508 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0114 02:18:19.679105 5508 out.go:204] - Generating certificates and keys ...
I0114 02:18:19.679206 5508 kubeadm.go:317] [certs] Using existing ca certificate authority
I0114 02:18:19.679275 5508 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0114 02:18:19.679350 5508 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0114 02:18:19.679415 5508 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
I0114 02:18:19.679497 5508 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
I0114 02:18:19.679550 5508 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
I0114 02:18:19.679601 5508 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
I0114 02:18:19.679689 5508 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
I0114 02:18:19.679773 5508 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0114 02:18:19.679839 5508 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0114 02:18:19.679881 5508 kubeadm.go:317] [certs] Using the existing "sa" key
I0114 02:18:19.679958 5508 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0114 02:18:19.841344 5508 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0114 02:18:20.086877 5508 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0114 02:18:20.153567 5508 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0114 02:18:20.210152 5508 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0114 02:18:20.210820 5508 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0114 02:18:20.232473 5508 out.go:204] - Booting up control plane ...
I0114 02:18:20.232605 5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0114 02:18:20.232668 5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0114 02:18:20.232756 5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0114 02:18:20.232826 5508 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0114 02:18:20.232978 5508 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0114 02:19:00.221626 5508 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
I0114 02:19:00.222576 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:19:00.222808 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:19:05.223409 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:19:05.223577 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:19:15.225765 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:19:15.225996 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:19:35.226508 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:19:35.226666 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:20:15.229417 5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0114 02:20:15.229641 5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0114 02:20:15.229662 5508 kubeadm.go:317]
I0114 02:20:15.229706 5508 kubeadm.go:317] Unfortunately, an error has occurred:
I0114 02:20:15.229758 5508 kubeadm.go:317] timed out waiting for the condition
I0114 02:20:15.229775 5508 kubeadm.go:317]
I0114 02:20:15.229824 5508 kubeadm.go:317] This error is likely caused by:
I0114 02:20:15.229859 5508 kubeadm.go:317] - The kubelet is not running
I0114 02:20:15.229981 5508 kubeadm.go:317] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0114 02:20:15.229995 5508 kubeadm.go:317]
I0114 02:20:15.230087 5508 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0114 02:20:15.230136 5508 kubeadm.go:317] - 'systemctl status kubelet'
I0114 02:20:15.230187 5508 kubeadm.go:317] - 'journalctl -xeu kubelet'
I0114 02:20:15.230202 5508 kubeadm.go:317]
I0114 02:20:15.230310 5508 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0114 02:20:15.230427 5508 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0114 02:20:15.230442 5508 kubeadm.go:317]
I0114 02:20:15.230553 5508 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
I0114 02:20:15.230621 5508 kubeadm.go:317] - 'docker ps -a | grep kube | grep -v pause'
I0114 02:20:15.230708 5508 kubeadm.go:317] Once you have found the failing container, you can inspect its logs with:
I0114 02:20:15.230750 5508 kubeadm.go:317] - 'docker logs CONTAINERID'
I0114 02:20:15.230762 5508 kubeadm.go:317]
I0114 02:20:15.232923 5508 kubeadm.go:317] W0114 10:18:19.068847 3439 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0114 02:20:15.232989 5508 kubeadm.go:317] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0114 02:20:15.233096 5508 kubeadm.go:317] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
I0114 02:20:15.233189 5508 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0114 02:20:15.233290 5508 kubeadm.go:317] W0114 10:18:20.214759 3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0114 02:20:15.233403 5508 kubeadm.go:317] W0114 10:18:20.215665 3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0114 02:20:15.233470 5508 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0114 02:20:15.233523 5508 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
I0114 02:20:15.233560 5508 kubeadm.go:398] StartCluster complete in 3m54.34908771s
I0114 02:20:15.233657 5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0114 02:20:15.256731 5508 logs.go:274] 0 containers: []
W0114 02:20:15.256745 5508 logs.go:276] No container was found matching "kube-apiserver"
I0114 02:20:15.256823 5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0114 02:20:15.280428 5508 logs.go:274] 0 containers: []
W0114 02:20:15.280442 5508 logs.go:276] No container was found matching "etcd"
I0114 02:20:15.280525 5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0114 02:20:15.303199 5508 logs.go:274] 0 containers: []
W0114 02:20:15.303212 5508 logs.go:276] No container was found matching "coredns"
I0114 02:20:15.303296 5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0114 02:20:15.326883 5508 logs.go:274] 0 containers: []
W0114 02:20:15.326895 5508 logs.go:276] No container was found matching "kube-scheduler"
I0114 02:20:15.326980 5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0114 02:20:15.349760 5508 logs.go:274] 0 containers: []
W0114 02:20:15.349773 5508 logs.go:276] No container was found matching "kube-proxy"
I0114 02:20:15.349859 5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0114 02:20:15.373266 5508 logs.go:274] 0 containers: []
W0114 02:20:15.373282 5508 logs.go:276] No container was found matching "kubernetes-dashboard"
I0114 02:20:15.373365 5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0114 02:20:15.395934 5508 logs.go:274] 0 containers: []
W0114 02:20:15.395947 5508 logs.go:276] No container was found matching "storage-provisioner"
I0114 02:20:15.396038 5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0114 02:20:15.418996 5508 logs.go:274] 0 containers: []
W0114 02:20:15.419008 5508 logs.go:276] No container was found matching "kube-controller-manager"
I0114 02:20:15.419018 5508 logs.go:123] Gathering logs for container status ...
I0114 02:20:15.419027 5508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0114 02:20:17.473489 5508 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054417949s)
I0114 02:20:17.473633 5508 logs.go:123] Gathering logs for kubelet ...
I0114 02:20:17.473640 5508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0114 02:20:17.512397 5508 logs.go:123] Gathering logs for dmesg ...
I0114 02:20:17.512409 5508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0114 02:20:17.525714 5508 logs.go:123] Gathering logs for describe nodes ...
I0114 02:20:17.525725 5508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0114 02:20:17.578861 5508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0114 02:20:17.578875 5508 logs.go:123] Gathering logs for Docker ...
I0114 02:20:17.578881 5508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
W0114 02:20:17.594083 5508 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all Kubernetes containers running in Docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0114 10:18:19.068847 3439 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0114 10:18:20.214759 3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0114 10:18:20.215665 3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
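The repeated [kubelet-check] lines are kubeadm polling the kubelet's health endpoint on 127.0.0.1:10248 until its timeout lapses; "connection refused" on every attempt means the kubelet process never started listening. A minimal re-creation of that poll loop (the interval and deadline are illustrative, not kubeadm's exact backoff):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet health endpoint the same way the
// [kubelet-check] lines above do, giving up after the deadline.
func waitForKubelet(deadline time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is up and healthy
			}
		}
		// "connection refused" lands here until the kubelet starts listening.
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("kubelet did not become healthy within %s", deadline)
}

func main() {
	if err := waitForKubelet(40 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```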
W0114 02:20:17.594862 5508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0114 02:20:17.659571 5508 out.go:177]
W0114 02:20:17.723725 5508 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0114 02:20:17.723871 5508 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0114 02:20:17.723946 5508 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
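The suggested --extra-config=kubelet.cgroup-driver=systemd targets the common failure where Docker and the kubelet disagree on the cgroup driver, which makes the kubelet exit immediately and matches the symptoms above. A sketch that reads Docker's driver so the two can be compared (the comparison logic is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Docker reports its cgroup driver via `docker info`; the kubelet must be
	// started with the matching cgroup driver (cgroupfs or systemd).
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("Docker cgroup driver:", driver)
	if driver == "systemd" {
		fmt.Println("try: minikube start --extra-config=kubelet.cgroup-driver=systemd")
	}
}
```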
I0114 02:20:17.745778 5508 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-021549 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (268.32s)