=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-106000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0330 08:48:56.164942 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:51:12.325808 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:51:18.008142 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.013570 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.025632 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.046144 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.086236 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.167097 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.328673 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.650837 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:19.291817 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:20.572711 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:23.135009 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:28.257436 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:38.498406 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:40.011932 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:51:58.980676 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:52:39.942374 25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-106000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m36.140497501s)
-- stdout --
* [ingress-addon-legacy-106000] minikube v1.29.0 on Darwin 13.3
- MINIKUBE_LOCATION=16199
- KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-106000 in cluster ingress-addon-legacy-106000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0330 08:48:39.786693 28466 out.go:296] Setting OutFile to fd 1 ...
I0330 08:48:39.786872 28466 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0330 08:48:39.786878 28466 out.go:309] Setting ErrFile to fd 2...
I0330 08:48:39.786882 28466 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0330 08:48:39.787005 28466 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
I0330 08:48:39.788481 28466 out.go:303] Setting JSON to false
I0330 08:48:39.808583 28466 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6487,"bootTime":1680184832,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W0330 08:48:39.808678 28466 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0330 08:48:39.830790 28466 out.go:177] * [ingress-addon-legacy-106000] minikube v1.29.0 on Darwin 13.3
I0330 08:48:39.872916 28466 notify.go:220] Checking for updates...
I0330 08:48:39.894650 28466 out.go:177] - MINIKUBE_LOCATION=16199
I0330 08:48:39.915734 28466 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
I0330 08:48:39.936784 28466 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0330 08:48:39.957825 28466 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0330 08:48:39.978723 28466 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
I0330 08:48:39.999766 28466 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0330 08:48:40.020934 28466 driver.go:365] Setting default libvirt URI to qemu:///system
I0330 08:48:40.086348 28466 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
I0330 08:48:40.086469 28466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0330 08:48:40.274261 28466 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:48:40.138873327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0330 08:48:40.296019 28466 out.go:177] * Using the docker driver based on user configuration
I0330 08:48:40.317922 28466 start.go:295] selected driver: docker
I0330 08:48:40.317942 28466 start.go:859] validating driver "docker" against <nil>
I0330 08:48:40.317956 28466 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0330 08:48:40.322130 28466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0330 08:48:40.507740 28466 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:48:40.374493638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0330 08:48:40.507865 28466 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0330 08:48:40.508040 28466 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0330 08:48:40.529793 28466 out.go:177] * Using Docker Desktop driver with root privileges
I0330 08:48:40.551499 28466 cni.go:84] Creating CNI manager for ""
I0330 08:48:40.551537 28466 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0330 08:48:40.551562 28466 start_flags.go:319] config:
{Name:ingress-addon-legacy-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-106000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0330 08:48:40.594587 28466 out.go:177] * Starting control plane node ingress-addon-legacy-106000 in cluster ingress-addon-legacy-106000
I0330 08:48:40.616490 28466 cache.go:120] Beginning downloading kic base image for docker with docker
I0330 08:48:40.637507 28466 out.go:177] * Pulling base image ...
I0330 08:48:40.679410 28466 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0330 08:48:40.679458 28466 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
I0330 08:48:40.743679 28466 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
I0330 08:48:40.743700 28466 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
I0330 08:48:40.780580 28466 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0330 08:48:40.780606 28466 cache.go:57] Caching tarball of preloaded images
I0330 08:48:40.781013 28466 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0330 08:48:40.802615 28466 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0330 08:48:40.844438 28466 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0330 08:48:41.044896 28466 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0330 08:49:04.629866 28466 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0330 08:49:04.630051 28466 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0330 08:49:05.247918 28466 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0330 08:49:05.248274 28466 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/config.json ...
I0330 08:49:05.248301 28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/config.json: {Name:mkf52b5e721448e731f6e88518122ed38f5b2097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0330 08:49:05.248634 28466 cache.go:193] Successfully downloaded all kic artifacts
I0330 08:49:05.248659 28466 start.go:364] acquiring machines lock for ingress-addon-legacy-106000: {Name:mka03da6851c44848a95a9e100f1a914957cd2eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0330 08:49:05.248832 28466 start.go:368] acquired machines lock for "ingress-addon-legacy-106000" in 152.226µs
I0330 08:49:05.248881 28466 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-106000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0330 08:49:05.248948 28466 start.go:125] createHost starting for "" (driver="docker")
I0330 08:49:05.270314 28466 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0330 08:49:05.270664 28466 start.go:159] libmachine.API.Create for "ingress-addon-legacy-106000" (driver="docker")
I0330 08:49:05.270718 28466 client.go:168] LocalClient.Create starting
I0330 08:49:05.270912 28466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem
I0330 08:49:05.270987 28466 main.go:141] libmachine: Decoding PEM data...
I0330 08:49:05.271018 28466 main.go:141] libmachine: Parsing certificate...
I0330 08:49:05.271146 28466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem
I0330 08:49:05.271197 28466 main.go:141] libmachine: Decoding PEM data...
I0330 08:49:05.271212 28466 main.go:141] libmachine: Parsing certificate...
I0330 08:49:05.292489 28466 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-106000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0330 08:49:05.354332 28466 cli_runner.go:211] docker network inspect ingress-addon-legacy-106000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0330 08:49:05.354466 28466 network_create.go:281] running [docker network inspect ingress-addon-legacy-106000] to gather additional debugging logs...
I0330 08:49:05.354484 28466 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-106000
W0330 08:49:05.411956 28466 cli_runner.go:211] docker network inspect ingress-addon-legacy-106000 returned with exit code 1
I0330 08:49:05.411983 28466 network_create.go:284] error running [docker network inspect ingress-addon-legacy-106000]: docker network inspect ingress-addon-legacy-106000: exit status 1
stdout:
[]
stderr:
Error: No such network: ingress-addon-legacy-106000
I0330 08:49:05.412004 28466 network_create.go:286] output of [docker network inspect ingress-addon-legacy-106000]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: ingress-addon-legacy-106000
** /stderr **
I0330 08:49:05.412090 28466 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0330 08:49:05.469923 28466 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0007dca70}
I0330 08:49:05.469958 28466 network_create.go:123] attempt to create docker network ingress-addon-legacy-106000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0330 08:49:05.470039 28466 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-106000 ingress-addon-legacy-106000
I0330 08:49:05.560387 28466 network_create.go:107] docker network ingress-addon-legacy-106000 192.168.49.0/24 created
I0330 08:49:05.560420 28466 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-106000" container
I0330 08:49:05.560545 28466 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0330 08:49:05.618687 28466 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-106000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-106000 --label created_by.minikube.sigs.k8s.io=true
I0330 08:49:05.678238 28466 oci.go:103] Successfully created a docker volume ingress-addon-legacy-106000
I0330 08:49:05.678375 28466 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-106000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-106000 --entrypoint /usr/bin/test -v ingress-addon-legacy-106000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
I0330 08:49:06.149527 28466 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-106000
I0330 08:49:06.149558 28466 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0330 08:49:06.149573 28466 kic.go:190] Starting extracting preloaded images to volume ...
I0330 08:49:06.149706 28466 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-106000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
I0330 08:49:12.426694 28466 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-106000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (6.27671951s)
I0330 08:49:12.426720 28466 kic.go:199] duration metric: took 6.276966 seconds to extract preloaded images to volume
I0330 08:49:12.426836 28466 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0330 08:49:12.613296 28466 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-106000 --name ingress-addon-legacy-106000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-106000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-106000 --network ingress-addon-legacy-106000 --ip 192.168.49.2 --volume ingress-addon-legacy-106000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
I0330 08:49:12.980178 28466 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Running}}
I0330 08:49:13.045172 28466 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Status}}
I0330 08:49:13.109790 28466 cli_runner.go:164] Run: docker exec ingress-addon-legacy-106000 stat /var/lib/dpkg/alternatives/iptables
I0330 08:49:13.231810 28466 oci.go:144] the created container "ingress-addon-legacy-106000" has a running status.
I0330 08:49:13.231852 28466 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa...
I0330 08:49:13.439580 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0330 08:49:13.439656 28466 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0330 08:49:13.544776 28466 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Status}}
I0330 08:49:13.606523 28466 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0330 08:49:13.606544 28466 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-106000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0330 08:49:13.718426 28466 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Status}}
I0330 08:49:13.778072 28466 machine.go:88] provisioning docker machine ...
I0330 08:49:13.778112 28466 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-106000"
I0330 08:49:13.778222 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:13.838678 28466 main.go:141] libmachine: Using SSH client type: native
I0330 08:49:13.839055 28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil> [] 0s} 127.0.0.1 56053 <nil> <nil>}
I0330 08:49:13.839071 28466 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-106000 && echo "ingress-addon-legacy-106000" | sudo tee /etc/hostname
I0330 08:49:13.966708 28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-106000
I0330 08:49:13.966792 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:14.027896 28466 main.go:141] libmachine: Using SSH client type: native
I0330 08:49:14.028242 28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil> [] 0s} 127.0.0.1 56053 <nil> <nil>}
I0330 08:49:14.028263 28466 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-106000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-106000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-106000' | sudo tee -a /etc/hosts;
fi
fi
I0330 08:49:14.146671 28466 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0330 08:49:14.146699 28466 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
I0330 08:49:14.146717 28466 ubuntu.go:177] setting up certificates
I0330 08:49:14.146725 28466 provision.go:83] configureAuth start
I0330 08:49:14.146805 28466 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-106000
I0330 08:49:14.207343 28466 provision.go:138] copyHostCerts
I0330 08:49:14.207387 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
I0330 08:49:14.207449 28466 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
I0330 08:49:14.207457 28466 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
I0330 08:49:14.207578 28466 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
I0330 08:49:14.207767 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
I0330 08:49:14.207805 28466 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
I0330 08:49:14.207810 28466 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
I0330 08:49:14.207871 28466 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
I0330 08:49:14.207988 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
I0330 08:49:14.208028 28466 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
I0330 08:49:14.208033 28466 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
I0330 08:49:14.208088 28466 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
I0330 08:49:14.208199 28466 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-106000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-106000]
I0330 08:49:14.260169 28466 provision.go:172] copyRemoteCerts
I0330 08:49:14.260234 28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0330 08:49:14.260284 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:14.320707 28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
I0330 08:49:14.407743 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0330 08:49:14.407821 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0330 08:49:14.425011 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem -> /etc/docker/server.pem
I0330 08:49:14.425082 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0330 08:49:14.442276 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0330 08:49:14.442339 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0330 08:49:14.460460 28466 provision.go:86] duration metric: configureAuth took 313.71574ms
I0330 08:49:14.460474 28466 ubuntu.go:193] setting minikube options for container-runtime
I0330 08:49:14.460628 28466 config.go:182] Loaded profile config "ingress-addon-legacy-106000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0330 08:49:14.460695 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:14.521207 28466 main.go:141] libmachine: Using SSH client type: native
I0330 08:49:14.521561 28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil> [] 0s} 127.0.0.1 56053 <nil> <nil>}
I0330 08:49:14.521578 28466 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0330 08:49:14.637715 28466 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0330 08:49:14.637729 28466 ubuntu.go:71] root file system type: overlay
I0330 08:49:14.637833 28466 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0330 08:49:14.637920 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:14.697377 28466 main.go:141] libmachine: Using SSH client type: native
I0330 08:49:14.697717 28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil> [] 0s} 127.0.0.1 56053 <nil> <nil>}
I0330 08:49:14.697770 28466 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0330 08:49:14.825084 28466 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0330 08:49:14.825202 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:14.885148 28466 main.go:141] libmachine: Using SSH client type: native
I0330 08:49:14.885501 28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil> [] 0s} 127.0.0.1 56053 <nil> <nil>}
I0330 08:49:14.885516 28466 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0330 08:49:15.490390 28466 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-03-30 15:49:14.822905858 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0330 08:49:15.490422 28466 machine.go:91] provisioned docker machine in 1.712275551s
I0330 08:49:15.490432 28466 client.go:171] LocalClient.Create took 10.219411101s
I0330 08:49:15.490452 28466 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-106000" took 10.219494346s
I0330 08:49:15.490462 28466 start.go:300] post-start starting for "ingress-addon-legacy-106000" (driver="docker")
I0330 08:49:15.490468 28466 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0330 08:49:15.490547 28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0330 08:49:15.490611 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:15.554184 28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
I0330 08:49:15.642991 28466 ssh_runner.go:195] Run: cat /etc/os-release
I0330 08:49:15.646692 28466 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0330 08:49:15.646715 28466 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0330 08:49:15.646725 28466 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0330 08:49:15.646729 28466 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0330 08:49:15.646738 28466 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
I0330 08:49:15.646835 28466 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
I0330 08:49:15.646999 28466 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
I0330 08:49:15.647006 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> /etc/ssl/certs/254482.pem
I0330 08:49:15.647200 28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0330 08:49:15.654630 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
I0330 08:49:15.672099 28466 start.go:303] post-start completed in 181.621912ms
I0330 08:49:15.672687 28466 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-106000
I0330 08:49:15.738379 28466 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/config.json ...
I0330 08:49:15.738826 28466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0330 08:49:15.738896 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:15.798923 28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
I0330 08:49:15.882436 28466 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0330 08:49:15.887204 28466 start.go:128] duration metric: createHost completed in 10.637938553s
I0330 08:49:15.887223 28466 start.go:83] releasing machines lock for "ingress-addon-legacy-106000", held for 10.638076888s
I0330 08:49:15.887328 28466 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-106000
I0330 08:49:15.947076 28466 ssh_runner.go:195] Run: cat /version.json
I0330 08:49:15.947128 28466 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0330 08:49:15.947144 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:15.947202 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:16.014459 28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
I0330 08:49:16.016117 28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
I0330 08:49:16.360511 28466 ssh_runner.go:195] Run: systemctl --version
I0330 08:49:16.365485 28466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0330 08:49:16.370624 28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0330 08:49:16.391231 28466 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0330 08:49:16.391306 28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0330 08:49:16.405223 28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0330 08:49:16.412829 28466 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0330 08:49:16.412847 28466 start.go:481] detecting cgroup driver to use...
I0330 08:49:16.412859 28466 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0330 08:49:16.412937 28466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0330 08:49:16.426441 28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
I0330 08:49:16.434973 28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0330 08:49:16.443319 28466 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0330 08:49:16.443374 28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0330 08:49:16.451912 28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0330 08:49:16.460432 28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0330 08:49:16.468990 28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0330 08:49:16.477375 28466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0330 08:49:16.485281 28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0330 08:49:16.493870 28466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0330 08:49:16.500972 28466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0330 08:49:16.508081 28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0330 08:49:16.569484 28466 ssh_runner.go:195] Run: sudo systemctl restart containerd
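The sed edits above rewrite /etc/containerd/config.toml in place (pause image, oom-score handling, cgroup driver, runc runtime version, CNI conf_dir) before containerd is restarted. A minimal sketch for checking what they left behind, using this run's profile name; this is illustrative and not part of the recorded test:
# inspect the keys minikube just rewrote (hypothetical verification step)
minikube -p ingress-addon-legacy-106000 ssh -- \
  sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml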
I0330 08:49:16.644180 28466 start.go:481] detecting cgroup driver to use...
I0330 08:49:16.644207 28466 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0330 08:49:16.644275 28466 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0330 08:49:16.654914 28466 cruntime.go:276] skipping containerd shutdown because we are bound to it
I0330 08:49:16.654982 28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0330 08:49:16.665258 28466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0330 08:49:16.680176 28466 ssh_runner.go:195] Run: which cri-dockerd
I0330 08:49:16.684407 28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0330 08:49:16.693161 28466 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (184 bytes)
I0330 08:49:16.707940 28466 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0330 08:49:16.799529 28466 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0330 08:49:16.891166 28466 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
I0330 08:49:16.891185 28466 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0330 08:49:16.904686 28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0330 08:49:16.996298 28466 ssh_runner.go:195] Run: sudo systemctl restart docker
I0330 08:49:17.214928 28466 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0330 08:49:17.241741 28466 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
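minikube pins Docker to the "cgroupfs" driver by shipping a small /etc/docker/daemon.json (144 bytes in this run) and restarting the daemon; the two docker version calls above confirm the daemon came back. One hedged way to confirm the effective driver, mirroring the docker info query the bootstrapper issues later in this log:
# illustrative check of the driver Docker actually reports after the restart
minikube -p ingress-addon-legacy-106000 ssh -- docker info --format '{{.CgroupDriver}}'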
I0330 08:49:17.314286 28466 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
I0330 08:49:17.314500 28466 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-106000 dig +short host.docker.internal
I0330 08:49:17.433383 28466 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0330 08:49:17.433501 28466 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0330 08:49:17.437904 28466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0330 08:49:17.448144 28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
I0330 08:49:17.510182 28466 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0330 08:49:17.510279 28466 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0330 08:49:17.530353 28466 docker.go:639] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0330 08:49:17.530373 28466 docker.go:569] Images already preloaded, skipping extraction
I0330 08:49:17.530466 28466 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0330 08:49:17.551435 28466 docker.go:639] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0330 08:49:17.551452 28466 cache_images.go:84] Images are preloaded, skipping loading
I0330 08:49:17.551529 28466 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0330 08:49:17.579226 28466 cni.go:84] Creating CNI manager for ""
I0330 08:49:17.579248 28466 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0330 08:49:17.579266 28466 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0330 08:49:17.579289 28466 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-106000 NodeName:ingress-addon-legacy-106000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0330 08:49:17.579400 28466 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-106000"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
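The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what later lands in /var/tmp/minikube/kubeadm.yaml. If one wanted to sanity-check the rendered file without bootstrapping the node, a sketch using kubeadm's dry-run mode (assumed available in v1.18; not something this test run does) would be:
# hypothetical validation pass against the generated config
sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run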
I0330 08:49:17.579472 28466 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-106000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-106000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
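The kubelet is started from the unit shown above plus the 10-kubeadm.conf drop-in scp'd a few lines below. To see the merged unit exactly as systemd resolves it on the node, one illustrative check (not part of the recorded run) is:
# show the kubelet unit together with minikube's drop-in
minikube -p ingress-addon-legacy-106000 ssh -- sudo systemctl cat kubelet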
I0330 08:49:17.579544 28466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0330 08:49:17.587521 28466 binaries.go:44] Found k8s binaries, skipping transfer
I0330 08:49:17.587590 28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0330 08:49:17.595146 28466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0330 08:49:17.608223 28466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0330 08:49:17.621105 28466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0330 08:49:17.634563 28466 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0330 08:49:17.638389 28466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0330 08:49:17.648427 28466 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000 for IP: 192.168.49.2
I0330 08:49:17.648445 28466 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0330 08:49:17.648615 28466 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
I0330 08:49:17.648692 28466 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
I0330 08:49:17.648733 28466 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.key
I0330 08:49:17.648746 28466 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.crt with IP's: []
I0330 08:49:17.714390 28466 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.crt ...
I0330 08:49:17.714400 28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.crt: {Name:mk06cae9dd57d2f59864f4f73d73ab5c187b7451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0330 08:49:17.714705 28466 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.key ...
I0330 08:49:17.714714 28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.key: {Name:mk763d7d2d2054d57c17008ee420bc5c87b1e530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0330 08:49:17.714918 28466 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key.dd3b5fb2
I0330 08:49:17.714934 28466 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0330 08:49:18.017594 28466 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt.dd3b5fb2 ...
I0330 08:49:18.017605 28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt.dd3b5fb2: {Name:mkaad4b5c2d1334472c6b9d39cd0c7762374ff65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0330 08:49:18.017907 28466 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key.dd3b5fb2 ...
I0330 08:49:18.017920 28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key.dd3b5fb2: {Name:mk681b41f39aff6e4c66737c50b1786379bffbc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0330 08:49:18.018188 28466 certs.go:333] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt
I0330 08:49:18.018426 28466 certs.go:337] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key
I0330 08:49:18.018629 28466 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key
I0330 08:49:18.018643 28466 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt with IP's: []
I0330 08:49:18.244020 28466 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt ...
I0330 08:49:18.244033 28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt: {Name:mk2a66ffa942d3d99e7bcc78026c98562ae3512d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0330 08:49:18.244317 28466 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key ...
I0330 08:49:18.244328 28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key: {Name:mk2f56053e25ffd6ac1b7edebdc05af692d48cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0330 08:49:18.244554 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0330 08:49:18.244582 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0330 08:49:18.244601 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0330 08:49:18.244675 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0330 08:49:18.244726 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0330 08:49:18.244746 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0330 08:49:18.244762 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0330 08:49:18.244779 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0330 08:49:18.244901 28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
W0330 08:49:18.244949 28466 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
I0330 08:49:18.244963 28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
I0330 08:49:18.244993 28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
I0330 08:49:18.245021 28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
I0330 08:49:18.245056 28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
I0330 08:49:18.245128 28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
I0330 08:49:18.245166 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> /usr/share/ca-certificates/254482.pem
I0330 08:49:18.245186 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0330 08:49:18.245207 28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem -> /usr/share/ca-certificates/25448.pem
I0330 08:49:18.245692 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0330 08:49:18.264194 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0330 08:49:18.281662 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0330 08:49:18.298835 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0330 08:49:18.316376 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0330 08:49:18.333599 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0330 08:49:18.351119 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0330 08:49:18.368426 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0330 08:49:18.385791 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
I0330 08:49:18.403713 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0330 08:49:18.421103 28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
I0330 08:49:18.438536 28466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0330 08:49:18.451759 28466 ssh_runner.go:195] Run: openssl version
I0330 08:49:18.457377 28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
I0330 08:49:18.465705 28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
I0330 08:49:18.469808 28466 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
I0330 08:49:18.469857 28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
I0330 08:49:18.475645 28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
I0330 08:49:18.483787 28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0330 08:49:18.491788 28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0330 08:49:18.495936 28466 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
I0330 08:49:18.495988 28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0330 08:49:18.501472 28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0330 08:49:18.509911 28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
I0330 08:49:18.518139 28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
I0330 08:49:18.522121 28466 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
I0330 08:49:18.522200 28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
I0330 08:49:18.527515 28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
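The openssl x509 -hash calls above are where the symlink names come from: each /etc/ssl/certs/<hash>.0 link (3ec20f2e.0, b5213941.0, 51391683.0) is named after the subject hash of the PEM it points to. A standalone sketch of the same computation, shown for the minikube CA only:
# illustrative: reproduce the subject hash that names /etc/ssl/certs/b5213941.0
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem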
I0330 08:49:18.535618 28466 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-106000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0330 08:49:18.535733 28466 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0330 08:49:18.555057 28466 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0330 08:49:18.563233 28466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0330 08:49:18.570783 28466 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0330 08:49:18.570830 28466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0330 08:49:18.578323 28466 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0330 08:49:18.578361 28466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0330 08:49:18.627344 28466 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0330 08:49:18.627389 28466 kubeadm.go:322] [preflight] Running pre-flight checks
I0330 08:49:18.797926 28466 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0330 08:49:18.798010 28466 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0330 08:49:18.798087 28466 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0330 08:49:18.951704 28466 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0330 08:49:18.952178 28466 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0330 08:49:18.952218 28466 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0330 08:49:19.026632 28466 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0330 08:49:19.067749 28466 out.go:204] - Generating certificates and keys ...
I0330 08:49:19.067834 28466 kubeadm.go:322] [certs] Using existing ca certificate authority
I0330 08:49:19.067939 28466 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0330 08:49:19.150687 28466 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0330 08:49:19.363611 28466 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0330 08:49:19.642874 28466 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0330 08:49:19.752741 28466 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0330 08:49:19.846785 28466 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0330 08:49:19.846901 28466 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0330 08:49:19.949666 28466 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0330 08:49:19.949774 28466 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0330 08:49:20.446662 28466 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0330 08:49:20.477859 28466 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0330 08:49:20.674987 28466 kubeadm.go:322] [certs] Generating "sa" key and public key
I0330 08:49:20.675076 28466 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0330 08:49:21.040673 28466 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0330 08:49:21.235530 28466 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0330 08:49:21.383755 28466 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0330 08:49:21.699969 28466 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0330 08:49:21.700526 28466 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0330 08:49:21.722083 28466 out.go:204] - Booting up control plane ...
I0330 08:49:21.722207 28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0330 08:49:21.722272 28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0330 08:49:21.722332 28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0330 08:49:21.722393 28466 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0330 08:49:21.722523 28466 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0330 08:50:01.710950 28466 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0330 08:50:01.712009 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:50:01.712230 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:50:06.713412 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:50:06.713614 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:50:16.715741 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:50:16.715969 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:50:36.718311 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:50:36.718515 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:51:16.721574 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:51:16.721931 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:51:16.721956 28466 kubeadm.go:322]
I0330 08:51:16.722008 28466 kubeadm.go:322] Unfortunately, an error has occurred:
I0330 08:51:16.722056 28466 kubeadm.go:322] timed out waiting for the condition
I0330 08:51:16.722066 28466 kubeadm.go:322]
I0330 08:51:16.722113 28466 kubeadm.go:322] This error is likely caused by:
I0330 08:51:16.722188 28466 kubeadm.go:322] - The kubelet is not running
I0330 08:51:16.722335 28466 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0330 08:51:16.722356 28466 kubeadm.go:322]
I0330 08:51:16.722476 28466 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0330 08:51:16.722517 28466 kubeadm.go:322] - 'systemctl status kubelet'
I0330 08:51:16.722561 28466 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0330 08:51:16.722567 28466 kubeadm.go:322]
I0330 08:51:16.722709 28466 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0330 08:51:16.722831 28466 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0330 08:51:16.722854 28466 kubeadm.go:322]
I0330 08:51:16.722951 28466 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0330 08:51:16.723016 28466 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0330 08:51:16.723113 28466 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0330 08:51:16.723146 28466 kubeadm.go:322] - 'docker logs CONTAINERID'
I0330 08:51:16.723151 28466 kubeadm.go:322]
I0330 08:51:16.726658 28466 kubeadm.go:322] W0330 15:49:18.626128 1166 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0330 08:51:16.726829 28466 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0330 08:51:16.726902 28466 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0330 08:51:16.727014 28466 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0330 08:51:16.727103 28466 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0330 08:51:16.727191 28466 kubeadm.go:322] W0330 15:49:21.704366 1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0330 08:51:16.727303 28466 kubeadm.go:322] W0330 15:49:21.705180 1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0330 08:51:16.727364 28466 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0330 08:51:16.727428 28466 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
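Every probe kubeadm retried above is the kubelet healthz endpoint on port 10248, and its troubleshooting hints can be run directly inside the docker-driver node with the same docker exec pattern used earlier in this log. A hedged sketch of those checks:
# run kubeadm's health probe and the suggested log sources by hand (illustrative)
docker exec -t ingress-addon-legacy-106000 curl -sSL http://localhost:10248/healthz
docker exec -t ingress-addon-legacy-106000 sudo systemctl status kubelet
docker exec -t ingress-addon-legacy-106000 sudo journalctl -xeu kubelet --no-pager
docker exec -t ingress-addon-legacy-106000 sh -c "docker ps -a | grep kube | grep -v pause"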
W0330 08:51:16.727721 28466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0330 15:49:18.626128 1166 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0330 15:49:21.704366 1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0330 15:49:21.705180 1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0330 08:51:16.727765 28466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0330 08:51:17.142218 28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0330 08:51:17.152019 28466 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0330 08:51:17.152076 28466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0330 08:51:17.159646 28466 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0330 08:51:17.159667 28466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0330 08:51:17.208398 28466 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0330 08:51:17.208449 28466 kubeadm.go:322] [preflight] Running pre-flight checks
I0330 08:51:17.376078 28466 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0330 08:51:17.376200 28466 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0330 08:51:17.376275 28466 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0330 08:51:17.531937 28466 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0330 08:51:17.532396 28466 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0330 08:51:17.532671 28466 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0330 08:51:17.610544 28466 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0330 08:51:17.632150 28466 out.go:204] - Generating certificates and keys ...
I0330 08:51:17.632224 28466 kubeadm.go:322] [certs] Using existing ca certificate authority
I0330 08:51:17.632321 28466 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0330 08:51:17.632415 28466 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0330 08:51:17.632482 28466 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0330 08:51:17.632551 28466 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0330 08:51:17.632624 28466 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0330 08:51:17.632688 28466 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0330 08:51:17.632742 28466 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0330 08:51:17.632828 28466 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0330 08:51:17.632903 28466 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0330 08:51:17.632940 28466 kubeadm.go:322] [certs] Using the existing "sa" key
I0330 08:51:17.632987 28466 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0330 08:51:17.821306 28466 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0330 08:51:17.912641 28466 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0330 08:51:18.032469 28466 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0330 08:51:18.336469 28466 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0330 08:51:18.337022 28466 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0330 08:51:18.358638 28466 out.go:204] - Booting up control plane ...
I0330 08:51:18.358817 28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0330 08:51:18.359019 28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0330 08:51:18.359159 28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0330 08:51:18.359374 28466 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0330 08:51:18.359675 28466 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0330 08:51:58.347328 28466 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0330 08:51:58.348328 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:51:58.348551 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:52:03.349774 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:52:03.349947 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:52:13.352032 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:52:13.352274 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:52:33.354600 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:52:33.354825 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:53:13.357584 28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0330 08:53:13.357815 28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0330 08:53:13.357827 28466 kubeadm.go:322]
I0330 08:53:13.357902 28466 kubeadm.go:322] Unfortunately, an error has occurred:
I0330 08:53:13.357960 28466 kubeadm.go:322] timed out waiting for the condition
I0330 08:53:13.357969 28466 kubeadm.go:322]
I0330 08:53:13.358024 28466 kubeadm.go:322] This error is likely caused by:
I0330 08:53:13.358073 28466 kubeadm.go:322] - The kubelet is not running
I0330 08:53:13.358194 28466 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0330 08:53:13.358209 28466 kubeadm.go:322]
I0330 08:53:13.358324 28466 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0330 08:53:13.358380 28466 kubeadm.go:322] - 'systemctl status kubelet'
I0330 08:53:13.358414 28466 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0330 08:53:13.358419 28466 kubeadm.go:322]
I0330 08:53:13.358541 28466 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0330 08:53:13.358630 28466 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0330 08:53:13.358638 28466 kubeadm.go:322]
I0330 08:53:13.358747 28466 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0330 08:53:13.358803 28466 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0330 08:53:13.358884 28466 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0330 08:53:13.358936 28466 kubeadm.go:322] - 'docker logs CONTAINERID'
I0330 08:53:13.358943 28466 kubeadm.go:322]
I0330 08:53:13.361707 28466 kubeadm.go:322] W0330 15:51:17.207080 3586 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0330 08:53:13.361861 28466 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0330 08:53:13.361936 28466 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0330 08:53:13.362056 28466 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
I0330 08:53:13.362151 28466 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0330 08:53:13.362255 28466 kubeadm.go:322] W0330 15:51:18.340722 3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0330 08:53:13.362359 28466 kubeadm.go:322] W0330 15:51:18.341529 3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0330 08:53:13.362433 28466 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0330 08:53:13.362491 28466 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0330 08:53:13.362535 28466 kubeadm.go:403] StartCluster complete in 3m54.820130526s
I0330 08:53:13.362641 28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0330 08:53:13.381700 28466 logs.go:277] 0 containers: []
W0330 08:53:13.381713 28466 logs.go:279] No container was found matching "kube-apiserver"
I0330 08:53:13.381790 28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0330 08:53:13.401224 28466 logs.go:277] 0 containers: []
W0330 08:53:13.401236 28466 logs.go:279] No container was found matching "etcd"
I0330 08:53:13.401303 28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0330 08:53:13.421383 28466 logs.go:277] 0 containers: []
W0330 08:53:13.421397 28466 logs.go:279] No container was found matching "coredns"
I0330 08:53:13.421467 28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0330 08:53:13.441098 28466 logs.go:277] 0 containers: []
W0330 08:53:13.441111 28466 logs.go:279] No container was found matching "kube-scheduler"
I0330 08:53:13.441189 28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0330 08:53:13.460470 28466 logs.go:277] 0 containers: []
W0330 08:53:13.460484 28466 logs.go:279] No container was found matching "kube-proxy"
I0330 08:53:13.460550 28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0330 08:53:13.479919 28466 logs.go:277] 0 containers: []
W0330 08:53:13.479932 28466 logs.go:279] No container was found matching "kube-controller-manager"
I0330 08:53:13.480000 28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0330 08:53:13.499065 28466 logs.go:277] 0 containers: []
W0330 08:53:13.499080 28466 logs.go:279] No container was found matching "kindnet"
I0330 08:53:13.499087 28466 logs.go:123] Gathering logs for kubelet ...
I0330 08:53:13.499102 28466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0330 08:53:13.537005 28466 logs.go:123] Gathering logs for dmesg ...
I0330 08:53:13.537018 28466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0330 08:53:13.550268 28466 logs.go:123] Gathering logs for describe nodes ...
I0330 08:53:13.550281 28466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0330 08:53:13.605935 28466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0330 08:53:13.605948 28466 logs.go:123] Gathering logs for Docker ...
I0330 08:53:13.605959 28466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0330 08:53:13.630203 28466 logs.go:123] Gathering logs for container status ...
I0330 08:53:13.630221 28466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0330 08:53:15.681227 28466 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050934887s)
W0330 08:53:15.681353 28466 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0330 15:51:17.207080 3586 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0330 15:51:18.340722 3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0330 15:51:18.341529 3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0330 08:53:15.681371 28466 out.go:239] *
W0330 08:53:15.681499 28466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0330 15:51:17.207080 3586 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0330 15:51:18.340722 3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0330 15:51:18.341529 3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0330 08:53:15.681515 28466 out.go:239] *
W0330 08:53:15.682163 28466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0330 08:53:15.745521 28466 out.go:177]
W0330 08:53:15.788074 28466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0330 15:51:17.207080 3586 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0330 15:51:18.340722 3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0330 15:51:18.341529 3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0330 08:53:15.788237 28466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0330 08:53:15.788317 28466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0330 08:53:15.809691 28466 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-106000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (276.18s)