=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-292000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0222 20:32:26.929903 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:34:43.080232 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:35:03.149851 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.155453 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.165588 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.186138 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.226945 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.307438 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.467629 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:03.787740 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:04.428019 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:05.708120 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:08.268341 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:10.768346 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/addons-566000/client.crt: no such file or directory
E0222 20:35:13.388620 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:23.629576 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
E0222 20:35:44.109837 3133 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/functional-106000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-292000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m19.117969949s)
-- stdout --
* [ingress-addon-legacy-292000] minikube v1.29.0 on Darwin 13.2
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-292000 in cluster ingress-addon-legacy-292000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0222 20:32:04.364557 6079 out.go:296] Setting OutFile to fd 1 ...
I0222 20:32:04.364728 6079 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0222 20:32:04.364733 6079 out.go:309] Setting ErrFile to fd 2...
I0222 20:32:04.364737 6079 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0222 20:32:04.364849 6079 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-2664/.minikube/bin
I0222 20:32:04.366288 6079 out.go:303] Setting JSON to false
I0222 20:32:04.384995 6079 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1899,"bootTime":1677124825,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0222 20:32:04.385061 6079 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0222 20:32:04.406908 6079 out.go:177] * [ingress-addon-legacy-292000] minikube v1.29.0 on Darwin 13.2
I0222 20:32:04.428916 6079 notify.go:220] Checking for updates...
I0222 20:32:04.450642 6079 out.go:177] - MINIKUBE_LOCATION=15909
I0222 20:32:04.472923 6079 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15909-2664/kubeconfig
I0222 20:32:04.494770 6079 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0222 20:32:04.515600 6079 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0222 20:32:04.536935 6079 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-2664/.minikube
I0222 20:32:04.558711 6079 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0222 20:32:04.579984 6079 driver.go:365] Setting default libvirt URI to qemu:///system
I0222 20:32:04.641015 6079 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
I0222 20:32:04.641173 6079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0222 20:32:04.784649 6079 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 04:32:04.690766088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0222 20:32:04.806603 6079 out.go:177] * Using the docker driver based on user configuration
I0222 20:32:04.850210 6079 start.go:296] selected driver: docker
I0222 20:32:04.850288 6079 start.go:857] validating driver "docker" against <nil>
I0222 20:32:04.850310 6079 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0222 20:32:04.854194 6079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0222 20:32:04.995790 6079 info.go:266] docker info: {ID:OZUI:YKWJ:UOTC:7ZLB:F6EB:CHWM:OTRK:2EBF:TWQA:NMVS:HOC3:AMA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 04:32:04.904263495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0222 20:32:04.995917 6079 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0222 20:32:04.996093 6079 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0222 20:32:05.020066 6079 out.go:177] * Using Docker Desktop driver with root privileges
I0222 20:32:05.041866 6079 cni.go:84] Creating CNI manager for ""
I0222 20:32:05.041905 6079 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0222 20:32:05.041923 6079 start_flags.go:319] config:
{Name:ingress-addon-legacy-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0222 20:32:05.084783 6079 out.go:177] * Starting control plane node ingress-addon-legacy-292000 in cluster ingress-addon-legacy-292000
I0222 20:32:05.106943 6079 cache.go:120] Beginning downloading kic base image for docker with docker
I0222 20:32:05.128666 6079 out.go:177] * Pulling base image ...
I0222 20:32:05.170906 6079 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0222 20:32:05.170965 6079 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
I0222 20:32:05.226462 6079 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
I0222 20:32:05.226488 6079 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
I0222 20:32:05.283975 6079 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0222 20:32:05.284016 6079 cache.go:57] Caching tarball of preloaded images
I0222 20:32:05.284419 6079 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0222 20:32:05.306327 6079 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0222 20:32:05.349070 6079 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0222 20:32:05.582930 6079 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0222 20:32:13.443498 6079 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0222 20:32:13.443665 6079 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0222 20:32:14.066075 6079 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0222 20:32:14.066340 6079 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/config.json ...
I0222 20:32:14.066366 6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/config.json: {Name:mkf72cad213af89d13db2bc5119e02acf8dda0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0222 20:32:14.066676 6079 cache.go:193] Successfully downloaded all kic artifacts
I0222 20:32:14.066701 6079 start.go:364] acquiring machines lock for ingress-addon-legacy-292000: {Name:mk4d7b66f3190c7c8ddc1c191fefbad8ee44f2ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0222 20:32:14.066831 6079 start.go:368] acquired machines lock for "ingress-addon-legacy-292000" in 122.725µs
I0222 20:32:14.066856 6079 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0222 20:32:14.066899 6079 start.go:125] createHost starting for "" (driver="docker")
I0222 20:32:14.101190 6079 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0222 20:32:14.101563 6079 start.go:159] libmachine.API.Create for "ingress-addon-legacy-292000" (driver="docker")
I0222 20:32:14.101606 6079 client.go:168] LocalClient.Create starting
I0222 20:32:14.101824 6079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem
I0222 20:32:14.101915 6079 main.go:141] libmachine: Decoding PEM data...
I0222 20:32:14.101952 6079 main.go:141] libmachine: Parsing certificate...
I0222 20:32:14.102064 6079 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem
I0222 20:32:14.102130 6079 main.go:141] libmachine: Decoding PEM data...
I0222 20:32:14.102147 6079 main.go:141] libmachine: Parsing certificate...
I0222 20:32:14.123885 6079 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0222 20:32:14.180732 6079 cli_runner.go:211] docker network inspect ingress-addon-legacy-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0222 20:32:14.180831 6079 network_create.go:281] running [docker network inspect ingress-addon-legacy-292000] to gather additional debugging logs...
I0222 20:32:14.180850 6079 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-292000
W0222 20:32:14.234568 6079 cli_runner.go:211] docker network inspect ingress-addon-legacy-292000 returned with exit code 1
I0222 20:32:14.234595 6079 network_create.go:284] error running [docker network inspect ingress-addon-legacy-292000]: docker network inspect ingress-addon-legacy-292000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-292000
I0222 20:32:14.234606 6079 network_create.go:286] output of [docker network inspect ingress-addon-legacy-292000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-292000
** /stderr **
I0222 20:32:14.234695 6079 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0222 20:32:14.289750 6079 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001275480}
I0222 20:32:14.289793 6079 network_create.go:123] attempt to create docker network ingress-addon-legacy-292000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0222 20:32:14.289869 6079 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-292000 ingress-addon-legacy-292000
I0222 20:32:14.418843 6079 network_create.go:107] docker network ingress-addon-legacy-292000 192.168.49.0/24 created
I0222 20:32:14.418876 6079 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-292000" container
I0222 20:32:14.418991 6079 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0222 20:32:14.473957 6079 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-292000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-292000 --label created_by.minikube.sigs.k8s.io=true
I0222 20:32:14.530025 6079 oci.go:103] Successfully created a docker volume ingress-addon-legacy-292000
I0222 20:32:14.530188 6079 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-292000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-292000 --entrypoint /usr/bin/test -v ingress-addon-legacy-292000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
I0222 20:32:14.969321 6079 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-292000
I0222 20:32:14.969358 6079 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0222 20:32:14.969372 6079 kic.go:190] Starting extracting preloaded images to volume ...
I0222 20:32:14.969502 6079 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-292000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
I0222 20:32:21.077634 6079 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-2664/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-292000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.108149443s)
I0222 20:32:21.077652 6079 kic.go:199] duration metric: took 6.108350 seconds to extract preloaded images to volume
I0222 20:32:21.077770 6079 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0222 20:32:21.222237 6079 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-292000 --name ingress-addon-legacy-292000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-292000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-292000 --network ingress-addon-legacy-292000 --ip 192.168.49.2 --volume ingress-addon-legacy-292000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
I0222 20:32:21.581086 6079 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Running}}
I0222 20:32:21.644392 6079 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Status}}
I0222 20:32:21.709609 6079 cli_runner.go:164] Run: docker exec ingress-addon-legacy-292000 stat /var/lib/dpkg/alternatives/iptables
I0222 20:32:21.823524 6079 oci.go:144] the created container "ingress-addon-legacy-292000" has a running status.
I0222 20:32:21.823558 6079 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa...
I0222 20:32:21.951543 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0222 20:32:21.951616 6079 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0222 20:32:22.056541 6079 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Status}}
I0222 20:32:22.113152 6079 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0222 20:32:22.113171 6079 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-292000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0222 20:32:22.217736 6079 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-292000 --format={{.State.Status}}
I0222 20:32:22.276734 6079 machine.go:88] provisioning docker machine ...
I0222 20:32:22.276779 6079 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-292000"
I0222 20:32:22.276887 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:22.334835 6079 main.go:141] libmachine: Using SSH client type: native
I0222 20:32:22.335258 6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50506 <nil> <nil>}
I0222 20:32:22.335275 6079 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-292000 && echo "ingress-addon-legacy-292000" | sudo tee /etc/hostname
I0222 20:32:22.480058 6079 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-292000
I0222 20:32:22.480146 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:22.538189 6079 main.go:141] libmachine: Using SSH client type: native
I0222 20:32:22.538538 6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50506 <nil> <nil>}
I0222 20:32:22.538554 6079 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-292000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-292000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-292000' | sudo tee -a /etc/hosts;
fi
fi
I0222 20:32:22.673088 6079 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0222 20:32:22.673110 6079 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-2664/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-2664/.minikube}
I0222 20:32:22.673127 6079 ubuntu.go:177] setting up certificates
I0222 20:32:22.673134 6079 provision.go:83] configureAuth start
I0222 20:32:22.673208 6079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-292000
I0222 20:32:22.730337 6079 provision.go:138] copyHostCerts
I0222 20:32:22.730388 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
I0222 20:32:22.730454 6079 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem, removing ...
I0222 20:32:22.730461 6079 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem
I0222 20:32:22.730590 6079 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/cert.pem (1123 bytes)
I0222 20:32:22.730782 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
I0222 20:32:22.730825 6079 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem, removing ...
I0222 20:32:22.730830 6079 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem
I0222 20:32:22.730894 6079 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/key.pem (1675 bytes)
I0222 20:32:22.731013 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
I0222 20:32:22.731049 6079 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem, removing ...
I0222 20:32:22.731054 6079 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem
I0222 20:32:22.731120 6079 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.pem (1082 bytes)
I0222 20:32:22.731249 6079 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-292000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-292000]
I0222 20:32:22.786277 6079 provision.go:172] copyRemoteCerts
I0222 20:32:22.786349 6079 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0222 20:32:22.786405 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:22.843979 6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
I0222 20:32:22.939318 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0222 20:32:22.939405 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0222 20:32:22.956590 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem -> /etc/docker/server.pem
I0222 20:32:22.956677 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0222 20:32:22.973665 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0222 20:32:22.973743 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0222 20:32:22.990935 6079 provision.go:86] duration metric: configureAuth took 317.787881ms
I0222 20:32:22.990953 6079 ubuntu.go:193] setting minikube options for container-runtime
I0222 20:32:22.991174 6079 config.go:182] Loaded profile config "ingress-addon-legacy-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0222 20:32:22.991260 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:23.048578 6079 main.go:141] libmachine: Using SSH client type: native
I0222 20:32:23.048947 6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50506 <nil> <nil>}
I0222 20:32:23.048961 6079 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0222 20:32:23.183936 6079 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0222 20:32:23.183964 6079 ubuntu.go:71] root file system type: overlay
I0222 20:32:23.184061 6079 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0222 20:32:23.184181 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:23.243298 6079 main.go:141] libmachine: Using SSH client type: native
I0222 20:32:23.243658 6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50506 <nil> <nil>}
I0222 20:32:23.243721 6079 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0222 20:32:23.387699 6079 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0222 20:32:23.387801 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:23.446987 6079 main.go:141] libmachine: Using SSH client type: native
I0222 20:32:23.447364 6079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil> [] 0s} 127.0.0.1 50506 <nil> <nil>}
I0222 20:32:23.447377 6079 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0222 20:32:24.054813 6079 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-02-23 04:32:23.385827822 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0222 20:32:24.054846 6079 machine.go:91] provisioned docker machine in 1.778110761s
I0222 20:32:24.054851 6079 client.go:171] LocalClient.Create took 9.953353786s
I0222 20:32:24.054924 6079 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-292000" took 9.953474955s
I0222 20:32:24.054996 6079 start.go:300] post-start starting for "ingress-addon-legacy-292000" (driver="docker")
I0222 20:32:24.055008 6079 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0222 20:32:24.055138 6079 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0222 20:32:24.055218 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:24.116228 6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
I0222 20:32:24.212198 6079 ssh_runner.go:195] Run: cat /etc/os-release
I0222 20:32:24.215853 6079 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0222 20:32:24.215870 6079 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0222 20:32:24.215885 6079 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0222 20:32:24.215891 6079 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0222 20:32:24.215901 6079 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/addons for local assets ...
I0222 20:32:24.216001 6079 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-2664/.minikube/files for local assets ...
I0222 20:32:24.216176 6079 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> 31332.pem in /etc/ssl/certs
I0222 20:32:24.216182 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /etc/ssl/certs/31332.pem
I0222 20:32:24.216378 6079 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0222 20:32:24.223632 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /etc/ssl/certs/31332.pem (1708 bytes)
I0222 20:32:24.240926 6079 start.go:303] post-start completed in 185.916538ms
I0222 20:32:24.241452 6079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-292000
I0222 20:32:24.299964 6079 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/config.json ...
I0222 20:32:24.300383 6079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0222 20:32:24.300444 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:24.358430 6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
I0222 20:32:24.452899 6079 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0222 20:32:24.457576 6079 start.go:128] duration metric: createHost completed in 10.390789341s
I0222 20:32:24.457591 6079 start.go:83] releasing machines lock for "ingress-addon-legacy-292000", held for 10.390872038s
I0222 20:32:24.457688 6079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-292000
I0222 20:32:24.515482 6079 ssh_runner.go:195] Run: cat /version.json
I0222 20:32:24.515525 6079 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0222 20:32:24.515555 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:24.515591 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:24.579954 6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
I0222 20:32:24.580122 6079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50506 SSHKeyPath:/Users/jenkins/minikube-integration/15909-2664/.minikube/machines/ingress-addon-legacy-292000/id_rsa Username:docker}
I0222 20:32:24.930410 6079 ssh_runner.go:195] Run: systemctl --version
I0222 20:32:24.934960 6079 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0222 20:32:24.939793 6079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0222 20:32:24.959226 6079 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0222 20:32:24.959305 6079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0222 20:32:24.973401 6079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0222 20:32:24.981459 6079 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0222 20:32:24.981477 6079 start.go:485] detecting cgroup driver to use...
I0222 20:32:24.981487 6079 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0222 20:32:24.981563 6079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0222 20:32:24.994869 6079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0222 20:32:25.003582 6079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0222 20:32:25.012181 6079 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0222 20:32:25.012252 6079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0222 20:32:25.020945 6079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0222 20:32:25.029481 6079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0222 20:32:25.038042 6079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0222 20:32:25.046848 6079 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0222 20:32:25.054689 6079 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0222 20:32:25.063078 6079 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0222 20:32:25.070718 6079 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0222 20:32:25.077781 6079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0222 20:32:25.145945 6079 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0222 20:32:25.218288 6079 start.go:485] detecting cgroup driver to use...
I0222 20:32:25.218309 6079 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0222 20:32:25.218374 6079 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0222 20:32:25.229819 6079 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0222 20:32:25.229909 6079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0222 20:32:25.241217 6079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0222 20:32:25.255509 6079 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0222 20:32:25.368775 6079 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0222 20:32:25.455324 6079 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0222 20:32:25.455361 6079 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0222 20:32:25.469783 6079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0222 20:32:25.557374 6079 ssh_runner.go:195] Run: sudo systemctl restart docker
I0222 20:32:25.778995 6079 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0222 20:32:25.806559 6079 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0222 20:32:25.856982 6079 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
I0222 20:32:25.857198 6079 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-292000 dig +short host.docker.internal
I0222 20:32:25.972637 6079 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0222 20:32:25.972740 6079 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0222 20:32:25.977126 6079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0222 20:32:25.986997 6079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-292000
I0222 20:32:26.044340 6079 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0222 20:32:26.044417 6079 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0222 20:32:26.065277 6079 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0222 20:32:26.065294 6079 docker.go:560] Images already preloaded, skipping extraction
I0222 20:32:26.065389 6079 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0222 20:32:26.085599 6079 docker.go:630] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0222 20:32:26.085625 6079 cache_images.go:84] Images are preloaded, skipping loading
I0222 20:32:26.085704 6079 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0222 20:32:26.112315 6079 cni.go:84] Creating CNI manager for ""
I0222 20:32:26.112335 6079 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0222 20:32:26.112348 6079 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0222 20:32:26.112371 6079 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-292000 NodeName:ingress-addon-legacy-292000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0222 20:32:26.112491 6079 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-292000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0222 20:32:26.112585 6079 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-292000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0222 20:32:26.112653 6079 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0222 20:32:26.120691 6079 binaries.go:44] Found k8s binaries, skipping transfer
I0222 20:32:26.120753 6079 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0222 20:32:26.128255 6079 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0222 20:32:26.141000 6079 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0222 20:32:26.154119 6079 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0222 20:32:26.167278 6079 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0222 20:32:26.171871 6079 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0222 20:32:26.181807 6079 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000 for IP: 192.168.49.2
I0222 20:32:26.181825 6079 certs.go:186] acquiring lock for shared ca certs: {Name:mkb249024925691007345c8175e91f91eb2c1055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0222 20:32:26.182023 6079 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key
I0222 20:32:26.182094 6079 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key
I0222 20:32:26.182143 6079 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.key
I0222 20:32:26.182155 6079 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.crt with IP's: []
I0222 20:32:26.304372 6079 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.crt ...
I0222 20:32:26.304383 6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.crt: {Name:mk6ec94438c90edcd19fac817403ff3040b023c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0222 20:32:26.304689 6079 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.key ...
I0222 20:32:26.304696 6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/client.key: {Name:mk49af5056c983e08e2bb81ab9fc7215d6b81b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0222 20:32:26.304894 6079 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key.dd3b5fb2
I0222 20:32:26.304910 6079 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0222 20:32:26.431795 6079 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt.dd3b5fb2 ...
I0222 20:32:26.431803 6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt.dd3b5fb2: {Name:mk55bf2c39823aa7d85fd59d9723a5e0bafb6355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0222 20:32:26.432035 6079 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key.dd3b5fb2 ...
I0222 20:32:26.432043 6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key.dd3b5fb2: {Name:mk9d6d862608a480c18cd0167b4acdd396312265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0222 20:32:26.432224 6079 certs.go:333] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt
I0222 20:32:26.432386 6079 certs.go:337] copying /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key
I0222 20:32:26.432561 6079 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key
I0222 20:32:26.432576 6079 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt with IP's: []
I0222 20:32:26.507992 6079 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt ...
I0222 20:32:26.508000 6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt: {Name:mkd141cc39f393f693905abe7d9dd8211695c7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0222 20:32:26.508342 6079 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key ...
I0222 20:32:26.508350 6079 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key: {Name:mk427519864f13b3c5d07c7ade41a7d5cc7d2659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0222 20:32:26.508538 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0222 20:32:26.508567 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0222 20:32:26.508593 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0222 20:32:26.508613 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0222 20:32:26.508633 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0222 20:32:26.508652 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0222 20:32:26.508671 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0222 20:32:26.508694 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0222 20:32:26.508794 6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem (1338 bytes)
W0222 20:32:26.508841 6079 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133_empty.pem, impossibly tiny 0 bytes
I0222 20:32:26.508851 6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca-key.pem (1675 bytes)
I0222 20:32:26.508891 6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/ca.pem (1082 bytes)
I0222 20:32:26.508925 6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/cert.pem (1123 bytes)
I0222 20:32:26.508955 6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/certs/key.pem (1675 bytes)
I0222 20:32:26.509021 6079 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem (1708 bytes)
I0222 20:32:26.509054 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem -> /usr/share/ca-certificates/31332.pem
I0222 20:32:26.509075 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0222 20:32:26.509092 6079 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem -> /usr/share/ca-certificates/3133.pem
I0222 20:32:26.509595 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0222 20:32:26.528835 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0222 20:32:26.546438 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0222 20:32:26.563996 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/profiles/ingress-addon-legacy-292000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0222 20:32:26.580950 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0222 20:32:26.598172 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0222 20:32:26.615134 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0222 20:32:26.632846 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0222 20:32:26.651086 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/files/etc/ssl/certs/31332.pem --> /usr/share/ca-certificates/31332.pem (1708 bytes)
I0222 20:32:26.669295 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0222 20:32:26.686954 6079 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-2664/.minikube/certs/3133.pem --> /usr/share/ca-certificates/3133.pem (1338 bytes)
I0222 20:32:26.704389 6079 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0222 20:32:26.717808 6079 ssh_runner.go:195] Run: openssl version
I0222 20:32:26.723511 6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31332.pem && ln -fs /usr/share/ca-certificates/31332.pem /etc/ssl/certs/31332.pem"
I0222 20:32:26.731583 6079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31332.pem
I0222 20:32:26.735858 6079 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:27 /usr/share/ca-certificates/31332.pem
I0222 20:32:26.735912 6079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31332.pem
I0222 20:32:26.741381 6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31332.pem /etc/ssl/certs/3ec20f2e.0"
I0222 20:32:26.749532 6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0222 20:32:26.757585 6079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0222 20:32:26.761836 6079 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:22 /usr/share/ca-certificates/minikubeCA.pem
I0222 20:32:26.761885 6079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0222 20:32:26.767463 6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0222 20:32:26.775465 6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3133.pem && ln -fs /usr/share/ca-certificates/3133.pem /etc/ssl/certs/3133.pem"
I0222 20:32:26.783646 6079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3133.pem
I0222 20:32:26.787912 6079 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:27 /usr/share/ca-certificates/3133.pem
I0222 20:32:26.787957 6079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3133.pem
I0222 20:32:26.793500 6079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3133.pem /etc/ssl/certs/51391683.0"
I0222 20:32:26.801362 6079 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0222 20:32:26.801505 6079 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0222 20:32:26.821128 6079 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0222 20:32:26.829136 6079 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0222 20:32:26.836786 6079 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0222 20:32:26.836838 6079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0222 20:32:26.844399 6079 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0222 20:32:26.844426 6079 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0222 20:32:26.893688 6079 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0222 20:32:26.893751 6079 kubeadm.go:322] [preflight] Running pre-flight checks
I0222 20:32:27.062272 6079 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0222 20:32:27.062365 6079 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0222 20:32:27.062505 6079 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0222 20:32:27.218238 6079 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0222 20:32:27.218896 6079 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0222 20:32:27.218961 6079 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0222 20:32:27.291119 6079 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0222 20:32:27.332668 6079 out.go:204] - Generating certificates and keys ...
I0222 20:32:27.332785 6079 kubeadm.go:322] [certs] Using existing ca certificate authority
I0222 20:32:27.332864 6079 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0222 20:32:27.622312 6079 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0222 20:32:27.732857 6079 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0222 20:32:27.851448 6079 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0222 20:32:27.938217 6079 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0222 20:32:28.043180 6079 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0222 20:32:28.043389 6079 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0222 20:32:28.223115 6079 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0222 20:32:28.223358 6079 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0222 20:32:28.368045 6079 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0222 20:32:28.466167 6079 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0222 20:32:28.547739 6079 kubeadm.go:322] [certs] Generating "sa" key and public key
I0222 20:32:28.547847 6079 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0222 20:32:28.699930 6079 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0222 20:32:29.005866 6079 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0222 20:32:29.216630 6079 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0222 20:32:29.435835 6079 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0222 20:32:29.436494 6079 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0222 20:32:29.458090 6079 out.go:204] - Booting up control plane ...
I0222 20:32:29.458230 6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0222 20:32:29.458359 6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0222 20:32:29.458452 6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0222 20:32:29.458561 6079 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0222 20:32:29.458735 6079 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0222 20:33:09.445963 6079 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0222 20:33:09.447038 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:33:09.447244 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:33:14.448612 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:33:14.448830 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:33:24.449747 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:33:24.449939 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:33:44.451161 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:33:44.451383 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:34:24.452063 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:34:24.452340 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:34:24.452363 6079 kubeadm.go:322]
I0222 20:34:24.452414 6079 kubeadm.go:322] Unfortunately, an error has occurred:
I0222 20:34:24.452469 6079 kubeadm.go:322] timed out waiting for the condition
I0222 20:34:24.452475 6079 kubeadm.go:322]
I0222 20:34:24.452531 6079 kubeadm.go:322] This error is likely caused by:
I0222 20:34:24.452573 6079 kubeadm.go:322] - The kubelet is not running
I0222 20:34:24.452715 6079 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0222 20:34:24.452724 6079 kubeadm.go:322]
I0222 20:34:24.452868 6079 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0222 20:34:24.452921 6079 kubeadm.go:322] - 'systemctl status kubelet'
I0222 20:34:24.452978 6079 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0222 20:34:24.452986 6079 kubeadm.go:322]
I0222 20:34:24.453133 6079 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0222 20:34:24.453223 6079 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0222 20:34:24.453237 6079 kubeadm.go:322]
I0222 20:34:24.453334 6079 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0222 20:34:24.453393 6079 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0222 20:34:24.453500 6079 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0222 20:34:24.453539 6079 kubeadm.go:322] - 'docker logs CONTAINERID'
I0222 20:34:24.453548 6079 kubeadm.go:322]
I0222 20:34:24.456324 6079 kubeadm.go:322] W0223 04:32:26.892660 1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0222 20:34:24.456478 6079 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0222 20:34:24.456542 6079 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0222 20:34:24.456674 6079 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0222 20:34:24.456763 6079 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0222 20:34:24.456868 6079 kubeadm.go:322] W0223 04:32:29.441635 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0222 20:34:24.456966 6079 kubeadm.go:322] W0223 04:32:29.442545 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0222 20:34:24.457032 6079 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0222 20:34:24.457096 6079 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0222 20:34:24.457315 6079 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 04:32:26.892660 1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 04:32:29.441635 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 04:32:29.442545 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-292000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 04:32:26.892660 1155 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 04:32:29.441635 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 04:32:29.442545 1155 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0222 20:34:24.457358 6079 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0222 20:34:24.866960 6079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0222 20:34:24.876575 6079 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0222 20:34:24.876647 6079 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0222 20:34:24.883863 6079 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0222 20:34:24.883884 6079 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0222 20:34:24.930392 6079 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0222 20:34:24.930457 6079 kubeadm.go:322] [preflight] Running pre-flight checks
I0222 20:34:25.091424 6079 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0222 20:34:25.091521 6079 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0222 20:34:25.091612 6079 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0222 20:34:25.240347 6079 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0222 20:34:25.240911 6079 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0222 20:34:25.241119 6079 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0222 20:34:25.317712 6079 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0222 20:34:25.339324 6079 out.go:204] - Generating certificates and keys ...
I0222 20:34:25.339412 6079 kubeadm.go:322] [certs] Using existing ca certificate authority
I0222 20:34:25.339485 6079 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0222 20:34:25.339559 6079 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0222 20:34:25.339632 6079 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0222 20:34:25.339696 6079 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0222 20:34:25.339744 6079 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0222 20:34:25.339812 6079 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0222 20:34:25.339868 6079 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0222 20:34:25.339936 6079 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0222 20:34:25.339998 6079 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0222 20:34:25.340050 6079 kubeadm.go:322] [certs] Using the existing "sa" key
I0222 20:34:25.340114 6079 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0222 20:34:25.610187 6079 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0222 20:34:25.683219 6079 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0222 20:34:25.803435 6079 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0222 20:34:25.896546 6079 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0222 20:34:25.896903 6079 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0222 20:34:25.918633 6079 out.go:204] - Booting up control plane ...
I0222 20:34:25.918750 6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0222 20:34:25.918848 6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0222 20:34:25.918942 6079 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0222 20:34:25.919051 6079 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0222 20:34:25.919249 6079 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0222 20:35:05.905001 6079 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0222 20:35:05.905862 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:35:05.906032 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:35:10.907045 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:35:10.907276 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:35:20.908013 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:35:20.908155 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:35:40.908898 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:35:40.909068 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:36:20.909016 6079 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0222 20:36:20.909221 6079 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0222 20:36:20.909233 6079 kubeadm.go:322]
I0222 20:36:20.909264 6079 kubeadm.go:322] Unfortunately, an error has occurred:
I0222 20:36:20.909294 6079 kubeadm.go:322] timed out waiting for the condition
I0222 20:36:20.909300 6079 kubeadm.go:322]
I0222 20:36:20.909324 6079 kubeadm.go:322] This error is likely caused by:
I0222 20:36:20.909349 6079 kubeadm.go:322] - The kubelet is not running
I0222 20:36:20.909445 6079 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0222 20:36:20.909458 6079 kubeadm.go:322]
I0222 20:36:20.909569 6079 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0222 20:36:20.909604 6079 kubeadm.go:322] - 'systemctl status kubelet'
I0222 20:36:20.909636 6079 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0222 20:36:20.909647 6079 kubeadm.go:322]
I0222 20:36:20.909726 6079 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0222 20:36:20.909794 6079 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0222 20:36:20.909804 6079 kubeadm.go:322]
I0222 20:36:20.909879 6079 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0222 20:36:20.909922 6079 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0222 20:36:20.909991 6079 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0222 20:36:20.910022 6079 kubeadm.go:322] - 'docker logs CONTAINERID'
I0222 20:36:20.910027 6079 kubeadm.go:322]
I0222 20:36:20.912472 6079 kubeadm.go:322] W0223 04:34:24.929893 3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0222 20:36:20.912630 6079 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0222 20:36:20.912718 6079 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0222 20:36:20.912832 6079 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0222 20:36:20.912915 6079 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0222 20:36:20.913015 6079 kubeadm.go:322] W0223 04:34:25.900490 3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0222 20:36:20.913115 6079 kubeadm.go:322] W0223 04:34:25.901886 3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0222 20:36:20.913182 6079 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0222 20:36:20.913246 6079 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
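The IsDockerSystemdCheck warning above is one likely cause of the unhealthy kubelet: Docker in the node reports the cgroupfs cgroup driver, while systemd is recommended. A sketch of aligning the two, assuming Docker inside the node reads /etc/docker/daemon.json and has no other settings there that would need merging:

  # inside the node: switch Docker to the systemd cgroup driver, then restart it
  $ sudo tee /etc/docker/daemon.json <<'EOF'
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF
  $ sudo systemctl restart docker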
I0222 20:36:20.913285 6079 kubeadm.go:403] StartCluster complete in 3m54.114604572s
I0222 20:36:20.913392 6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0222 20:36:20.932409 6079 logs.go:278] 0 containers: []
W0222 20:36:20.932423 6079 logs.go:280] No container was found matching "kube-apiserver"
I0222 20:36:20.932497 6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0222 20:36:20.952144 6079 logs.go:278] 0 containers: []
W0222 20:36:20.952157 6079 logs.go:280] No container was found matching "etcd"
I0222 20:36:20.952233 6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0222 20:36:20.971685 6079 logs.go:278] 0 containers: []
W0222 20:36:20.971697 6079 logs.go:280] No container was found matching "coredns"
I0222 20:36:20.971769 6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0222 20:36:20.990595 6079 logs.go:278] 0 containers: []
W0222 20:36:20.990609 6079 logs.go:280] No container was found matching "kube-scheduler"
I0222 20:36:20.990686 6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0222 20:36:21.008982 6079 logs.go:278] 0 containers: []
W0222 20:36:21.009003 6079 logs.go:280] No container was found matching "kube-proxy"
I0222 20:36:21.009072 6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0222 20:36:21.028480 6079 logs.go:278] 0 containers: []
W0222 20:36:21.028496 6079 logs.go:280] No container was found matching "kube-controller-manager"
I0222 20:36:21.028566 6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0222 20:36:21.047367 6079 logs.go:278] 0 containers: []
W0222 20:36:21.047381 6079 logs.go:280] No container was found matching "kindnet"
I0222 20:36:21.047449 6079 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0222 20:36:21.066208 6079 logs.go:278] 0 containers: []
W0222 20:36:21.066221 6079 logs.go:280] No container was found matching "storage-provisioner"
I0222 20:36:21.066228 6079 logs.go:124] Gathering logs for Docker ...
I0222 20:36:21.066235 6079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0222 20:36:21.092541 6079 logs.go:124] Gathering logs for container status ...
I0222 20:36:21.092554 6079 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0222 20:36:23.136862 6079 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044316675s)
I0222 20:36:23.136986 6079 logs.go:124] Gathering logs for kubelet ...
I0222 20:36:23.136993 6079 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0222 20:36:23.175088 6079 logs.go:124] Gathering logs for dmesg ...
I0222 20:36:23.175103 6079 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0222 20:36:23.187702 6079 logs.go:124] Gathering logs for describe nodes ...
I0222 20:36:23.187714 6079 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0222 20:36:23.241376 6079 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
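The describe-nodes failure shows nothing answering on localhost:8443, the apiserver port this profile uses, which matches the empty kube-apiserver container list above. A quick sketch of confirming that from inside the node:

  # /healthz is served when the apiserver is up; here the dial should fail
  $ curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
  # and no kube-apiserver container should exist at all
  $ sudo docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'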
W0222 20:36:23.241404 6079 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 04:34:24.929893 3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 04:34:25.900490 3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 04:34:25.901886 3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0222 20:36:23.241421 6079 out.go:239] *
W0222 20:36:23.241552 6079 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 04:34:24.929893 3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 04:34:25.900490 3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 04:34:25.901886 3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0222 20:36:23.241566 6079 out.go:239] *
W0222 20:36:23.242221 6079 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
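If the advice above does not help, the box asks for a full log bundle on the GitHub issue. With this run's binary and profile that would be:

  # collect the complete minikube logs to attach to the issue
  $ out/minikube-darwin-amd64 logs --file=logs.txt -p ingress-addon-legacy-292000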
I0222 20:36:23.306028 6079 out.go:177]
W0222 20:36:23.349169 6079 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 04:34:24.929893 3563 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 04:34:25.900490 3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 04:34:25.901886 3563 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0222 20:36:23.349275 6079 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0222 20:36:23.349379 6079 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0222 20:36:23.371013 6079 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-292000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (259.15s)
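The run exits with status 109 alongside the K8S_KUBELET_NOT_RUNNING reason above. A plausible manual retry, combining the start flags from this test with the suggested kubelet override; this is a sketch based only on the log, not a verified fix:

  # wipe the half-initialized profile, then retry with the suggested kubelet cgroup driver
  $ out/minikube-darwin-amd64 delete -p ingress-addon-legacy-292000
  $ out/minikube-darwin-amd64 start -p ingress-addon-legacy-292000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd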