=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-502000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0216 08:58:01.367929 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:58:29.051251 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/addons-983000/client.crt: no such file or directory
E0216 08:58:59.580701 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.585838 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.595955 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.616782 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.657651 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.739749 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:58:59.901427 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:00.221526 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:00.861682 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:02.141863 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:04.702003 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:09.822025 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:20.062016 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 08:59:40.541849 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:00:21.501892 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
E0216 09:01:43.452599 2151 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/functional-060000/client.crt: no such file or directory
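The repeated cert_rotation errors above appear to come from the test binary's client-go certificate reloader: they reference client certs for the addons-983000 and functional-060000 profiles, which were torn down earlier in the run, so the shared kubeconfig presumably still carries stale user entries. A quick way to confirm on the build host (sketch; paths copied from the messages above):

  KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
  # show kubeconfig entries that still point at the deleted profiles
  grep -nE 'addons-983000|functional-060000' "$KUBECONFIG"
  # list which profile directories actually remain on disk
  ls /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/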
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-502000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m37.991142177s)
-- stdout --
* [ingress-addon-legacy-502000] minikube v1.32.0 on Darwin 14.3.1
- MINIKUBE_LOCATION=17936
- KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-502000 in cluster ingress-addon-legacy-502000
* Pulling base image v0.0.42-1708008208-17936 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
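The doubled "Generating certificates and keys" / "Booting up control plane" lines suggest kubeadm init failed on the first attempt and minikube retried it before exiting with status 109. When reproducing locally, the node-side logs for the failed boot can usually be pulled with (sketch; profile name taken from the command above):

  out/minikube-darwin-amd64 status -p ingress-addon-legacy-502000
  out/minikube-darwin-amd64 logs -p ingress-addon-legacy-502000
  docker exec ingress-addon-legacy-502000 journalctl -u kubelet --no-pager | tail -n 50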
** stderr **
I0216 08:57:37.748120 5455 out.go:291] Setting OutFile to fd 1 ...
I0216 08:57:37.748381 5455 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:57:37.748388 5455 out.go:304] Setting ErrFile to fd 2...
I0216 08:57:37.748393 5455 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 08:57:37.748583 5455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17936-1021/.minikube/bin
I0216 08:57:37.750249 5455 out.go:298] Setting JSON to false
I0216 08:57:37.773782 5455 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1628,"bootTime":1708101029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0216 08:57:37.773899 5455 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0216 08:57:37.795594 5455 out.go:177] * [ingress-addon-legacy-502000] minikube v1.32.0 on Darwin 14.3.1
I0216 08:57:37.837875 5455 out.go:177] - MINIKUBE_LOCATION=17936
I0216 08:57:37.837994 5455 notify.go:220] Checking for updates...
I0216 08:57:37.859716 5455 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/17936-1021/kubeconfig
I0216 08:57:37.880441 5455 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0216 08:57:37.901854 5455 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0216 08:57:37.923627 5455 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17936-1021/.minikube
I0216 08:57:37.944404 5455 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0216 08:57:37.966036 5455 driver.go:392] Setting default libvirt URI to qemu:///system
I0216 08:57:38.022545 5455 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
I0216 08:57:38.022712 5455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0216 08:57:38.129173 5455 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-16 16:57:38.118799055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
I0216 08:57:38.171421 5455 out.go:177] * Using the docker driver based on user configuration
I0216 08:57:38.192486 5455 start.go:299] selected driver: docker
I0216 08:57:38.192504 5455 start.go:903] validating driver "docker" against <nil>
I0216 08:57:38.192517 5455 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0216 08:57:38.196093 5455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0216 08:57:38.303796 5455 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-16 16:57:38.294045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:htt
ps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cg
roupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev
Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for
an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
I0216 08:57:38.303943 5455 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0216 08:57:38.304115 5455 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0216 08:57:38.325570 5455 out.go:177] * Using Docker Desktop driver with root privileges
I0216 08:57:38.346414 5455 cni.go:84] Creating CNI manager for ""
I0216 08:57:38.346449 5455 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0216 08:57:38.346465 5455 start_flags.go:323] config:
{Name:ingress-addon-legacy-502000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-502000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0216 08:57:38.368618 5455 out.go:177] * Starting control plane node ingress-addon-legacy-502000 in cluster ingress-addon-legacy-502000
I0216 08:57:38.410397 5455 cache.go:121] Beginning downloading kic base image for docker with docker
I0216 08:57:38.431485 5455 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
I0216 08:57:38.474324 5455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0216 08:57:38.474418 5455 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
I0216 08:57:38.525329 5455 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
I0216 08:57:38.525353 5455 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
I0216 08:57:38.729727 5455 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0216 08:57:38.729751 5455 cache.go:56] Caching tarball of preloaded images
I0216 08:57:38.729994 5455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0216 08:57:38.751537 5455 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0216 08:57:38.794354 5455 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0216 08:57:39.354078 5455 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0216 08:57:56.686139 5455 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0216 08:57:56.686349 5455 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0216 08:57:57.328634 5455 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
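The preload tarball is fetched with an md5 checksum baked into the download URL and verified at preload.go:256; if corruption of the cached tarball is suspected, the same check can be repeated by hand on the host (sketch; `md5 -q` is the macOS form, `md5sum` on Linux):

  cd /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball
  md5 -q preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
  # expected (from the download URL above): ff35f06d4f6c0bac9297b8f85d8ebf70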
I0216 08:57:57.328873 5455 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/config.json ...
I0216 08:57:57.328897 5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/config.json: {Name:mkcf0f7ad907db6fa82502d38c90f22d7a31a393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 08:57:57.329647 5455 cache.go:194] Successfully downloaded all kic artifacts
I0216 08:57:57.329679 5455 start.go:365] acquiring machines lock for ingress-addon-legacy-502000: {Name:mkaa184d9ec1a667ce31139c0cb669fd5169a0b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0216 08:57:57.329928 5455 start.go:369] acquired machines lock for "ingress-addon-legacy-502000" in 212.988µs
I0216 08:57:57.329971 5455 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-502000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-502000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0216 08:57:57.330071 5455 start.go:125] createHost starting for "" (driver="docker")
I0216 08:57:57.362977 5455 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0216 08:57:57.363311 5455 start.go:159] libmachine.API.Create for "ingress-addon-legacy-502000" (driver="docker")
I0216 08:57:57.363363 5455 client.go:168] LocalClient.Create starting
I0216 08:57:57.363952 5455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem
I0216 08:57:57.364386 5455 main.go:141] libmachine: Decoding PEM data...
I0216 08:57:57.364415 5455 main.go:141] libmachine: Parsing certificate...
I0216 08:57:57.364516 5455 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem
I0216 08:57:57.364873 5455 main.go:141] libmachine: Decoding PEM data...
I0216 08:57:57.364889 5455 main.go:141] libmachine: Parsing certificate...
I0216 08:57:57.385100 5455 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-502000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0216 08:57:57.438198 5455 cli_runner.go:211] docker network inspect ingress-addon-legacy-502000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0216 08:57:57.438321 5455 network_create.go:281] running [docker network inspect ingress-addon-legacy-502000] to gather additional debugging logs...
I0216 08:57:57.438342 5455 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-502000
W0216 08:57:57.490978 5455 cli_runner.go:211] docker network inspect ingress-addon-legacy-502000 returned with exit code 1
I0216 08:57:57.491013 5455 network_create.go:284] error running [docker network inspect ingress-addon-legacy-502000]: docker network inspect ingress-addon-legacy-502000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-502000 not found
I0216 08:57:57.491031 5455 network_create.go:286] output of [docker network inspect ingress-addon-legacy-502000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-502000 not found
** /stderr **
I0216 08:57:57.491179 5455 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0216 08:57:57.543835 5455 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020c0b90}
I0216 08:57:57.543871 5455 network_create.go:124] attempt to create docker network ingress-addon-legacy-502000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0216 08:57:57.543948 5455 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-502000 ingress-addon-legacy-502000
I0216 08:57:57.636009 5455 network_create.go:108] docker network ingress-addon-legacy-502000 192.168.49.0/24 created
I0216 08:57:57.636078 5455 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-502000" container
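The cluster gets its own bridge network and a fixed node IP; the subnet and gateway can be confirmed directly against the daemon (sketch; the --format template is an assumption, the expected values come from the lines above):

  docker network inspect ingress-addon-legacy-502000 \
    --format '{{(index .IPAM.Config 0).Subnet}} -> {{(index .IPAM.Config 0).Gateway}}'
  # expected: 192.168.49.0/24 -> 192.168.49.1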
I0216 08:57:57.636223 5455 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0216 08:57:57.689437 5455 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-502000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-502000 --label created_by.minikube.sigs.k8s.io=true
I0216 08:57:57.742591 5455 oci.go:103] Successfully created a docker volume ingress-addon-legacy-502000
I0216 08:57:57.742717 5455 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-502000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-502000 --entrypoint /usr/bin/test -v ingress-addon-legacy-502000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
I0216 08:57:58.197156 5455 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-502000
I0216 08:57:58.197190 5455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0216 08:57:58.197202 5455 kic.go:194] Starting extracting preloaded images to volume ...
I0216 08:57:58.197320 5455 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-502000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
I0216 08:58:00.964194 5455 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-502000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (2.766844965s)
I0216 08:58:00.964225 5455 kic.go:203] duration metric: took 2.767060 seconds to extract preloaded images to volume
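The preloaded images are now unpacked into the ingress-addon-legacy-502000 volume rather than pulled over the network. If that step is in doubt, the volume contents can be inspected with a throwaway container (sketch; reuses the kicbase image already in the daemon and assumes /usr/bin/ls exists in it, as /usr/bin/test and /usr/bin/tar do above):

  docker run --rm --entrypoint /usr/bin/ls \
    -v ingress-addon-legacy-502000:/var \
    gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf \
    /var/lib/docker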
I0216 08:58:00.964355 5455 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0216 08:58:01.077477 5455 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-502000 --name ingress-addon-legacy-502000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-502000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-502000 --network ingress-addon-legacy-502000 --ip 192.168.49.2 --volume ingress-addon-legacy-502000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
I0216 08:58:01.392926 5455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Running}}
I0216 08:58:01.451770 5455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
I0216 08:58:01.513261 5455 cli_runner.go:164] Run: docker exec ingress-addon-legacy-502000 stat /var/lib/dpkg/alternatives/iptables
I0216 08:58:01.624531 5455 oci.go:144] the created container "ingress-addon-legacy-502000" has a running status.
I0216 08:58:01.624575 5455 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa...
I0216 08:58:01.696043 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0216 08:58:01.696196 5455 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0216 08:58:01.769109 5455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
I0216 08:58:01.829255 5455 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0216 08:58:01.829302 5455 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-502000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0216 08:58:01.950859 5455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-502000 --format={{.State.Status}}
I0216 08:58:02.008225 5455 machine.go:88] provisioning docker machine ...
I0216 08:58:02.008278 5455 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-502000"
I0216 08:58:02.008395 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:02.067425 5455 main.go:141] libmachine: Using SSH client type: native
I0216 08:58:02.067769 5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 50597 <nil> <nil>}
I0216 08:58:02.067786 5455 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-502000 && echo "ingress-addon-legacy-502000" | sudo tee /etc/hostname
I0216 08:58:02.233550 5455 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-502000
I0216 08:58:02.233636 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:02.290088 5455 main.go:141] libmachine: Using SSH client type: native
I0216 08:58:02.290373 5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 50597 <nil> <nil>}
I0216 08:58:02.290388 5455 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-502000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-502000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-502000' | sudo tee -a /etc/hosts;
fi
fi
I0216 08:58:02.430074 5455 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0216 08:58:02.430097 5455 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17936-1021/.minikube CaCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17936-1021/.minikube}
I0216 08:58:02.430117 5455 ubuntu.go:177] setting up certificates
I0216 08:58:02.430125 5455 provision.go:83] configureAuth start
I0216 08:58:02.430182 5455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-502000
I0216 08:58:02.486559 5455 provision.go:138] copyHostCerts
I0216 08:58:02.486632 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
I0216 08:58:02.486735 5455 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem, removing ...
I0216 08:58:02.486742 5455 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem
I0216 08:58:02.486894 5455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.pem (1082 bytes)
I0216 08:58:02.487086 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
I0216 08:58:02.487156 5455 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem, removing ...
I0216 08:58:02.487161 5455 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem
I0216 08:58:02.487279 5455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/cert.pem (1123 bytes)
I0216 08:58:02.487494 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
I0216 08:58:02.487565 5455 exec_runner.go:144] found /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem, removing ...
I0216 08:58:02.487571 5455 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem
I0216 08:58:02.487716 5455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17936-1021/.minikube/key.pem (1675 bytes)
I0216 08:58:02.488172 5455 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-502000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-502000]
I0216 08:58:02.604837 5455 provision.go:172] copyRemoteCerts
I0216 08:58:02.605120 5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0216 08:58:02.605183 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:02.661178 5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
I0216 08:58:02.765895 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0216 08:58:02.777605 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0216 08:58:02.823382 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem -> /etc/docker/server.pem
I0216 08:58:02.823468 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0216 08:58:02.867130 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0216 08:58:02.867295 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0216 08:58:02.911459 5455 provision.go:86] duration metric: configureAuth took 481.322673ms
I0216 08:58:02.911476 5455 ubuntu.go:193] setting minikube options for container-runtime
I0216 08:58:02.911638 5455 config.go:182] Loaded profile config "ingress-addon-legacy-502000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0216 08:58:02.911715 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:02.967022 5455 main.go:141] libmachine: Using SSH client type: native
I0216 08:58:02.967334 5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 50597 <nil> <nil>}
I0216 08:58:02.967351 5455 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0216 08:58:03.110293 5455 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0216 08:58:03.110310 5455 ubuntu.go:71] root file system type: overlay
I0216 08:58:03.110386 5455 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0216 08:58:03.110469 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:03.218860 5455 main.go:141] libmachine: Using SSH client type: native
I0216 08:58:03.219264 5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 50597 <nil> <nil>}
I0216 08:58:03.219327 5455 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0216 08:58:03.388450 5455 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0216 08:58:03.388666 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:03.443922 5455 main.go:141] libmachine: Using SSH client type: native
I0216 08:58:03.444244 5455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 50597 <nil> <nil>}
I0216 08:58:03.444259 5455 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0216 08:58:04.115332 5455 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-02-06 21:12:51.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-02-16 16:58:03.383626691 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
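With the new unit installed and the daemon restarted, dockerd inside the node should be running with the overridden ExecStart (unix socket plus TLS on tcp://0.0.0.0:2376). If provisioning stalled at this point, a few quick probes from the host (sketch; `systemctl cat docker.service` is the same check minikube itself runs later in this log):

  docker exec ingress-addon-legacy-502000 systemctl is-active docker
  docker exec ingress-addon-legacy-502000 pgrep -a dockerd
  docker exec ingress-addon-legacy-502000 systemctl cat docker.service | grep ExecStart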
I0216 08:58:04.115358 5455 machine.go:91] provisioned docker machine in 2.107136588s
I0216 08:58:04.115366 5455 client.go:171] LocalClient.Create took 6.752093637s
I0216 08:58:04.115383 5455 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-502000" took 6.75217795s
I0216 08:58:04.115394 5455 start.go:300] post-start starting for "ingress-addon-legacy-502000" (driver="docker")
I0216 08:58:04.115401 5455 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0216 08:58:04.115468 5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0216 08:58:04.115535 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:04.169442 5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
I0216 08:58:04.273350 5455 ssh_runner.go:195] Run: cat /etc/os-release
I0216 08:58:04.277814 5455 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0216 08:58:04.277843 5455 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0216 08:58:04.277851 5455 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0216 08:58:04.277856 5455 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0216 08:58:04.277867 5455 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/addons for local assets ...
I0216 08:58:04.277971 5455 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17936-1021/.minikube/files for local assets ...
I0216 08:58:04.278415 5455 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> 21512.pem in /etc/ssl/certs
I0216 08:58:04.278423 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> /etc/ssl/certs/21512.pem
I0216 08:58:04.278664 5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0216 08:58:04.295330 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /etc/ssl/certs/21512.pem (1708 bytes)
I0216 08:58:04.338586 5455 start.go:303] post-start completed in 223.186121ms
I0216 08:58:04.339186 5455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-502000
I0216 08:58:04.394625 5455 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/config.json ...
I0216 08:58:04.395682 5455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0216 08:58:04.395748 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:04.448006 5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
I0216 08:58:04.541666 5455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0216 08:58:04.547340 5455 start.go:128] duration metric: createHost completed in 7.217361727s
I0216 08:58:04.547355 5455 start.go:83] releasing machines lock for "ingress-addon-legacy-502000", held for 7.217502013s
I0216 08:58:04.547435 5455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-502000
I0216 08:58:04.601820 5455 ssh_runner.go:195] Run: cat /version.json
I0216 08:58:04.601898 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:04.602424 5455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0216 08:58:04.602766 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:04.659083 5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
I0216 08:58:04.659095 5455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50597 SSHKeyPath:/Users/jenkins/minikube-integration/17936-1021/.minikube/machines/ingress-addon-legacy-502000/id_rsa Username:docker}
I0216 08:58:04.866405 5455 ssh_runner.go:195] Run: systemctl --version
I0216 08:58:04.871337 5455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0216 08:58:04.876819 5455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0216 08:58:04.921333 5455 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0216 08:58:04.921471 5455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0216 08:58:04.954287 5455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0216 08:58:04.985340 5455 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0216 08:58:04.985389 5455 start.go:475] detecting cgroup driver to use...
I0216 08:58:04.985408 5455 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0216 08:58:04.985555 5455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0216 08:58:05.016154 5455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0216 08:58:05.034331 5455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0216 08:58:05.053270 5455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0216 08:58:05.053324 5455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0216 08:58:05.070568 5455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0216 08:58:05.088143 5455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0216 08:58:05.106015 5455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0216 08:58:05.122714 5455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0216 08:58:05.140723 5455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0216 08:58:05.158387 5455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0216 08:58:05.173464 5455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0216 08:58:05.191352 5455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0216 08:58:05.257009 5455 ssh_runner.go:195] Run: sudo systemctl restart containerd
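The sed commands above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver, the registry.k8s.io/pause:3.2 sandbox image and /etc/cni/net.d for CNI configs. A minimal sketch of verifying the result on the node (run inside a shell opened with minikube ssh -p ingress-addon-legacy-502000; the grep targets assume the usual config.toml key names, which are not shown in this log):
  sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
  sudo grep -n 'sandbox_image' /etc/containerd/config.toml   # expect: sandbox_image = "registry.k8s.io/pause:3.2"
  sudo grep -n 'conf_dir' /etc/containerd/config.toml        # expect: conf_dir = "/etc/cni/net.d"
  systemctl is-active containerd                             # should report "active" after the restart above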
I0216 08:58:05.353568 5455 start.go:475] detecting cgroup driver to use...
I0216 08:58:05.353589 5455 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0216 08:58:05.353656 5455 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0216 08:58:05.374328 5455 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0216 08:58:05.374407 5455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0216 08:58:05.396963 5455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0216 08:58:05.427964 5455 ssh_runner.go:195] Run: which cri-dockerd
I0216 08:58:05.433157 5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0216 08:58:05.450919 5455 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0216 08:58:05.485293 5455 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0216 08:58:05.591548 5455 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0216 08:58:05.662182 5455 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0216 08:58:05.662282 5455 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0216 08:58:05.692438 5455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0216 08:58:05.759831 5455 ssh_runner.go:195] Run: sudo systemctl restart docker
I0216 08:58:06.029300 5455 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0216 08:58:06.052250 5455 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0216 08:58:06.120286 5455 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
I0216 08:58:06.120464 5455 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-502000 dig +short host.docker.internal
I0216 08:58:06.222503 5455 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0216 08:58:06.222965 5455 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0216 08:58:06.227733 5455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0216 08:58:06.247303 5455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-502000
I0216 08:58:06.300395 5455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0216 08:58:06.300481 5455 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0216 08:58:06.318382 5455 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0216 08:58:06.318402 5455 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0216 08:58:06.318468 5455 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0216 08:58:06.334311 5455 ssh_runner.go:195] Run: which lz4
I0216 08:58:06.339327 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0216 08:58:06.339964 5455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0216 08:58:06.344359 5455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0216 08:58:06.344380 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0216 08:58:13.462734 5455 docker.go:649] Took 7.123434 seconds to copy over tarball
I0216 08:58:13.462809 5455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0216 08:58:15.244006 5455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.781199765s)
I0216 08:58:15.244027 5455 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0216 08:58:15.300372 5455 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0216 08:58:15.315656 5455 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0216 08:58:15.346394 5455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0216 08:58:15.411323 5455 ssh_runner.go:195] Run: sudo systemctl restart docker
I0216 08:58:16.753285 5455 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.341944187s)
I0216 08:58:16.753372 5455 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0216 08:58:16.770484 5455 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0216 08:58:16.770498 5455 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0216 08:58:16.770513 5455 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0216 08:58:16.775138 5455 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0216 08:58:16.775266 5455 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0216 08:58:16.775463 5455 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0216 08:58:16.775937 5455 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0216 08:58:16.776268 5455 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0216 08:58:16.776387 5455 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0216 08:58:16.776995 5455 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0216 08:58:16.777101 5455 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0216 08:58:16.780811 5455 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0216 08:58:16.781618 5455 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0216 08:58:16.781686 5455 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0216 08:58:16.783313 5455 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0216 08:58:16.783872 5455 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0216 08:58:16.783919 5455 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0216 08:58:16.783938 5455 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0216 08:58:16.784049 5455 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0216 08:58:18.779701 5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0216 08:58:18.797477 5455 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0216 08:58:18.797516 5455 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
I0216 08:58:18.797576 5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0216 08:58:18.814734 5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0216 08:58:18.853761 5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0216 08:58:18.872260 5455 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0216 08:58:18.872289 5455 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0216 08:58:18.872353 5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0216 08:58:18.889877 5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0216 08:58:18.901292 5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0216 08:58:18.918480 5455 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0216 08:58:18.918507 5455 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0216 08:58:18.918571 5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0216 08:58:18.932040 5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0216 08:58:18.932617 5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0216 08:58:18.933711 5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0216 08:58:18.935865 5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0216 08:58:18.944765 5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0216 08:58:18.955047 5455 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0216 08:58:18.955073 5455 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0216 08:58:18.955094 5455 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0216 08:58:18.955110 5455 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0216 08:58:18.955128 5455 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0216 08:58:18.955142 5455 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0216 08:58:18.955154 5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0216 08:58:18.955156 5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0216 08:58:18.955199 5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0216 08:58:18.969357 5455 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0216 08:58:18.969416 5455 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
I0216 08:58:18.969535 5455 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0216 08:58:18.991832 5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0216 08:58:18.993332 5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0216 08:58:18.993352 5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0216 08:58:18.998487 5455 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0216 08:58:19.399112 5455 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0216 08:58:19.417822 5455 cache_images.go:92] LoadImages completed in 2.647334221s
W0216 08:58:19.417867 5455 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17936-1021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
I0216 08:58:19.417945 5455 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0216 08:58:19.467564 5455 cni.go:84] Creating CNI manager for ""
I0216 08:58:19.467589 5455 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0216 08:58:19.467608 5455 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0216 08:58:19.467625 5455 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-502000 NodeName:ingress-addon-legacy-502000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0216 08:58:19.467782 5455 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-502000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0216 08:58:19.467857 5455 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-502000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-502000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0216 08:58:19.467937 5455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0216 08:58:19.483719 5455 binaries.go:44] Found k8s binaries, skipping transfer
I0216 08:58:19.483771 5455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0216 08:58:19.499674 5455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0216 08:58:19.528495 5455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0216 08:58:19.559535 5455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
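The kubelet unit, the 10-kubeadm.conf drop-in (the [Unit]/[Service]/ExecStart block shown a few entries above) and the kubeadm config have now been staged on the node. A hedged sketch of inspecting what systemd will actually run, assuming the profile name created by this test run:
  minikube ssh -p ingress-addon-legacy-502000 -- sudo systemctl cat kubelet                  # unit merged with its drop-ins
  minikube ssh -p ingress-addon-legacy-502000 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new # the kubeadm config staged above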
I0216 08:58:19.590356 5455 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0216 08:58:19.594800 5455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0216 08:58:19.612675 5455 certs.go:56] Setting up /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000 for IP: 192.168.49.2
I0216 08:58:19.612732 5455 certs.go:190] acquiring lock for shared ca certs: {Name:mk8795f926ccc5dd497b243df5a2c158b5c5b28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 08:58:19.613268 5455 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key
I0216 08:58:19.613557 5455 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key
I0216 08:58:19.613608 5455 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.key
I0216 08:58:19.613623 5455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.crt with IP's: []
I0216 08:58:19.794216 5455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.crt ...
I0216 08:58:19.794231 5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.crt: {Name:mk007431836d8995fd7c22de8c14850cae5ca9ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 08:58:19.794554 5455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.key ...
I0216 08:58:19.794564 5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/client.key: {Name:mkeb539da9b3168b95889f91a7453b7d5c2b2e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 08:58:19.794794 5455 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key.dd3b5fb2
I0216 08:58:19.794811 5455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0216 08:58:19.846807 5455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt.dd3b5fb2 ...
I0216 08:58:19.846818 5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt.dd3b5fb2: {Name:mk5480c7b30447a8a0f8b617cf7dff4aab9c8c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 08:58:19.847079 5455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key.dd3b5fb2 ...
I0216 08:58:19.847087 5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key.dd3b5fb2: {Name:mk31590b01b058cbf0eca75dfc306771ef7085cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 08:58:19.847281 5455 certs.go:337] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt
I0216 08:58:19.847484 5455 certs.go:341] copying /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key
I0216 08:58:19.847647 5455 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key
I0216 08:58:19.847660 5455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt with IP's: []
I0216 08:58:19.955161 5455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt ...
I0216 08:58:19.955174 5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt: {Name:mk9a5f2a0bdda23065003abebaa4a93798b37f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 08:58:19.955438 5455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key ...
I0216 08:58:19.955447 5455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key: {Name:mk160ac2947238b07d42e0a7d5fdc070ffd4f536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 08:58:19.955644 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0216 08:58:19.955676 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0216 08:58:19.955699 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0216 08:58:19.955717 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0216 08:58:19.955736 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0216 08:58:19.955755 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0216 08:58:19.955772 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0216 08:58:19.955788 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0216 08:58:19.955889 5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem (1338 bytes)
W0216 08:58:19.956193 5455 certs.go:433] ignoring /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151_empty.pem, impossibly tiny 0 bytes
I0216 08:58:19.956205 5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca-key.pem (1679 bytes)
I0216 08:58:19.956247 5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/ca.pem (1082 bytes)
I0216 08:58:19.956285 5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/cert.pem (1123 bytes)
I0216 08:58:19.956323 5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/certs/key.pem (1675 bytes)
I0216 08:58:19.956415 5455 certs.go:437] found cert: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem (1708 bytes)
I0216 08:58:19.956458 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem -> /usr/share/ca-certificates/21512.pem
I0216 08:58:19.956482 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0216 08:58:19.956499 5455 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem -> /usr/share/ca-certificates/2151.pem
I0216 08:58:19.956956 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0216 08:58:20.002332 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0216 08:58:20.044280 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0216 08:58:20.086480 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/profiles/ingress-addon-legacy-502000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0216 08:58:20.128595 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0216 08:58:20.171059 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0216 08:58:20.214063 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0216 08:58:20.257675 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0216 08:58:20.299688 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/files/etc/ssl/certs/21512.pem --> /usr/share/ca-certificates/21512.pem (1708 bytes)
I0216 08:58:20.344264 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0216 08:58:20.386004 5455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17936-1021/.minikube/certs/2151.pem --> /usr/share/ca-certificates/2151.pem (1338 bytes)
I0216 08:58:20.427242 5455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0216 08:58:20.459629 5455 ssh_runner.go:195] Run: openssl version
I0216 08:58:20.465963 5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21512.pem && ln -fs /usr/share/ca-certificates/21512.pem /etc/ssl/certs/21512.pem"
I0216 08:58:20.483280 5455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21512.pem
I0216 08:58:20.487929 5455 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:51 /usr/share/ca-certificates/21512.pem
I0216 08:58:20.487970 5455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21512.pem
I0216 08:58:20.494921 5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21512.pem /etc/ssl/certs/3ec20f2e.0"
I0216 08:58:20.511663 5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0216 08:58:20.527629 5455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0216 08:58:20.532429 5455 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
I0216 08:58:20.532475 5455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0216 08:58:20.539939 5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0216 08:58:20.556322 5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2151.pem && ln -fs /usr/share/ca-certificates/2151.pem /etc/ssl/certs/2151.pem"
I0216 08:58:20.574627 5455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2151.pem
I0216 08:58:20.579388 5455 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:51 /usr/share/ca-certificates/2151.pem
I0216 08:58:20.579431 5455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2151.pem
I0216 08:58:20.586713 5455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2151.pem /etc/ssl/certs/51391683.0"
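The ln -fs commands above link each CA into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem). A sketch of deriving such a link name by hand on the node, using the same openssl invocation the log runs:
  H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$H.0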
I0216 08:58:20.603116 5455 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0216 08:58:20.607406 5455 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0216 08:58:20.607450 5455 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-502000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-502000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0216 08:58:20.607547 5455 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0216 08:58:20.624113 5455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0216 08:58:20.640900 5455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0216 08:58:20.656837 5455 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0216 08:58:20.656901 5455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0216 08:58:20.671988 5455 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0216 08:58:20.672014 5455 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0216 08:58:20.728634 5455 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0216 08:58:20.728675 5455 kubeadm.go:322] [preflight] Running pre-flight checks
I0216 08:58:20.978518 5455 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0216 08:58:20.978636 5455 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0216 08:58:20.978740 5455 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0216 08:58:21.140287 5455 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0216 08:58:21.141115 5455 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0216 08:58:21.141156 5455 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0216 08:58:21.224519 5455 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0216 08:58:21.267372 5455 out.go:204] - Generating certificates and keys ...
I0216 08:58:21.267475 5455 kubeadm.go:322] [certs] Using existing ca certificate authority
I0216 08:58:21.267559 5455 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0216 08:58:21.286891 5455 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0216 08:58:21.598643 5455 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0216 08:58:21.746290 5455 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0216 08:58:21.888216 5455 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0216 08:58:22.015562 5455 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0216 08:58:22.015788 5455 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0216 08:58:22.106306 5455 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0216 08:58:22.106414 5455 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0216 08:58:22.179275 5455 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0216 08:58:22.241942 5455 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0216 08:58:22.528004 5455 kubeadm.go:322] [certs] Generating "sa" key and public key
I0216 08:58:22.528101 5455 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0216 08:58:22.581010 5455 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0216 08:58:22.695276 5455 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0216 08:58:22.905791 5455 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0216 08:58:23.016194 5455 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0216 08:58:23.017585 5455 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0216 08:58:23.038369 5455 out.go:204] - Booting up control plane ...
I0216 08:58:23.038474 5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0216 08:58:23.038551 5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0216 08:58:23.038642 5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0216 08:58:23.038755 5455 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0216 08:58:23.038944 5455 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0216 08:59:03.027299 5455 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0216 08:59:03.027705 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 08:59:03.027866 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 08:59:08.028602 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 08:59:08.028765 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 08:59:18.029387 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 08:59:18.029549 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 08:59:38.030231 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 08:59:38.030373 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 09:00:18.030705 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 09:00:18.030923 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 09:00:18.030943 5455 kubeadm.go:322]
I0216 09:00:18.030976 5455 kubeadm.go:322] Unfortunately, an error has occurred:
I0216 09:00:18.031029 5455 kubeadm.go:322] timed out waiting for the condition
I0216 09:00:18.031046 5455 kubeadm.go:322]
I0216 09:00:18.031088 5455 kubeadm.go:322] This error is likely caused by:
I0216 09:00:18.031119 5455 kubeadm.go:322] - The kubelet is not running
I0216 09:00:18.031218 5455 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0216 09:00:18.031227 5455 kubeadm.go:322]
I0216 09:00:18.031304 5455 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0216 09:00:18.031352 5455 kubeadm.go:322] - 'systemctl status kubelet'
I0216 09:00:18.031402 5455 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0216 09:00:18.031423 5455 kubeadm.go:322]
I0216 09:00:18.031549 5455 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0216 09:00:18.031644 5455 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0216 09:00:18.031711 5455 kubeadm.go:322]
I0216 09:00:18.031882 5455 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0216 09:00:18.031945 5455 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0216 09:00:18.032054 5455 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0216 09:00:18.032121 5455 kubeadm.go:322] - 'docker logs CONTAINERID'
I0216 09:00:18.032151 5455 kubeadm.go:322]
I0216 09:00:18.036609 5455 kubeadm.go:322] W0216 16:58:20.728027 1772 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0216 09:00:18.036766 5455 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0216 09:00:18.036851 5455 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0216 09:00:18.036954 5455 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
I0216 09:00:18.037043 5455 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0216 09:00:18.037150 5455 kubeadm.go:322] W0216 16:58:23.021805 1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0216 09:00:18.037257 5455 kubeadm.go:322] W0216 16:58:23.022723 1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0216 09:00:18.037331 5455 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0216 09:00:18.037398 5455 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
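With the docker driver the node is itself a container named after the profile, so the kubeadm troubleshooting hints above translate into host-side commands roughly like the following sketch (CONTAINERID is a placeholder to be taken from the listing, not a value from this log):
  docker exec -t ingress-addon-legacy-502000 systemctl status kubelet
  docker exec -t ingress-addon-legacy-502000 journalctl -xeu kubelet --no-pager
  docker exec -t ingress-addon-legacy-502000 sh -c "docker ps -a | grep kube | grep -v pause"
  docker exec -t ingress-addon-legacy-502000 docker logs CONTAINERID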
W0216 09:00:18.037480 5455 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0216 16:58:20.728027 1772 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0216 16:58:23.021805 1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0216 16:58:23.022723 1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-502000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0216 16:58:20.728027 1772 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0216 16:58:23.021805 1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0216 16:58:23.022723 1772 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0216 09:00:18.037519 5455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0216 09:00:18.571956 5455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0216 09:00:18.589353 5455 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0216 09:00:18.589404 5455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0216 09:00:18.605150 5455 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0216 09:00:18.605204 5455 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0216 09:00:18.672441 5455 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0216 09:00:18.672510 5455 kubeadm.go:322] [preflight] Running pre-flight checks
I0216 09:00:18.922731 5455 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0216 09:00:18.922830 5455 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0216 09:00:18.922919 5455 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0216 09:00:19.119860 5455 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0216 09:00:19.120910 5455 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0216 09:00:19.120948 5455 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0216 09:00:19.205192 5455 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0216 09:00:19.247548 5455 out.go:204] - Generating certificates and keys ...
I0216 09:00:19.247642 5455 kubeadm.go:322] [certs] Using existing ca certificate authority
I0216 09:00:19.247706 5455 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0216 09:00:19.247789 5455 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0216 09:00:19.247864 5455 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0216 09:00:19.247932 5455 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0216 09:00:19.247990 5455 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0216 09:00:19.248066 5455 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0216 09:00:19.248119 5455 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0216 09:00:19.248195 5455 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0216 09:00:19.248259 5455 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0216 09:00:19.248291 5455 kubeadm.go:322] [certs] Using the existing "sa" key
I0216 09:00:19.248340 5455 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0216 09:00:19.472915 5455 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0216 09:00:19.642944 5455 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0216 09:00:19.925706 5455 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0216 09:00:20.124250 5455 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0216 09:00:20.125142 5455 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0216 09:00:20.145545 5455 out.go:204] - Booting up control plane ...
I0216 09:00:20.145724 5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0216 09:00:20.145847 5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0216 09:00:20.145973 5455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0216 09:00:20.146108 5455 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0216 09:00:20.146368 5455 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0216 09:01:00.145396 5455 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0216 09:01:00.146078 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 09:01:00.146303 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 09:01:05.153564 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 09:01:05.153731 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 09:01:15.161939 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 09:01:15.162096 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 09:01:35.169255 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 09:01:35.169418 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 09:02:15.172751 5455 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0216 09:02:15.173057 5455 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0216 09:02:15.173080 5455 kubeadm.go:322]
I0216 09:02:15.173132 5455 kubeadm.go:322] Unfortunately, an error has occurred:
I0216 09:02:15.173172 5455 kubeadm.go:322] timed out waiting for the condition
I0216 09:02:15.173181 5455 kubeadm.go:322]
I0216 09:02:15.173238 5455 kubeadm.go:322] This error is likely caused by:
I0216 09:02:15.173289 5455 kubeadm.go:322] - The kubelet is not running
I0216 09:02:15.173399 5455 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0216 09:02:15.173411 5455 kubeadm.go:322]
I0216 09:02:15.173503 5455 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0216 09:02:15.173530 5455 kubeadm.go:322] - 'systemctl status kubelet'
I0216 09:02:15.173561 5455 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0216 09:02:15.173572 5455 kubeadm.go:322]
I0216 09:02:15.173649 5455 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0216 09:02:15.173721 5455 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0216 09:02:15.173727 5455 kubeadm.go:322]
I0216 09:02:15.173794 5455 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0216 09:02:15.173831 5455 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0216 09:02:15.173893 5455 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0216 09:02:15.173920 5455 kubeadm.go:322] - 'docker logs CONTAINERID'
I0216 09:02:15.173927 5455 kubeadm.go:322]
I0216 09:02:15.178000 5455 kubeadm.go:322] W0216 17:00:18.672180 4774 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0216 09:02:15.178148 5455 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0216 09:02:15.178203 5455 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0216 09:02:15.178312 5455 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
I0216 09:02:15.178407 5455 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0216 09:02:15.178512 5455 kubeadm.go:322] W0216 17:00:20.129169 4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0216 09:02:15.178618 5455 kubeadm.go:322] W0216 17:00:20.129911 4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0216 09:02:15.178680 5455 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0216 09:02:15.178741 5455 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0216 09:02:15.178769 5455 kubeadm.go:406] StartCluster complete in 3m54.54171177s
I0216 09:02:15.179983 5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0216 09:02:15.197604 5455 logs.go:276] 0 containers: []
W0216 09:02:15.197618 5455 logs.go:278] No container was found matching "kube-apiserver"
I0216 09:02:15.197691 5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0216 09:02:15.214710 5455 logs.go:276] 0 containers: []
W0216 09:02:15.214723 5455 logs.go:278] No container was found matching "etcd"
I0216 09:02:15.214797 5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0216 09:02:15.232954 5455 logs.go:276] 0 containers: []
W0216 09:02:15.232968 5455 logs.go:278] No container was found matching "coredns"
I0216 09:02:15.233041 5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0216 09:02:15.250798 5455 logs.go:276] 0 containers: []
W0216 09:02:15.250812 5455 logs.go:278] No container was found matching "kube-scheduler"
I0216 09:02:15.250901 5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0216 09:02:15.269104 5455 logs.go:276] 0 containers: []
W0216 09:02:15.269134 5455 logs.go:278] No container was found matching "kube-proxy"
I0216 09:02:15.269226 5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0216 09:02:15.286229 5455 logs.go:276] 0 containers: []
W0216 09:02:15.286245 5455 logs.go:278] No container was found matching "kube-controller-manager"
I0216 09:02:15.286305 5455 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0216 09:02:15.303718 5455 logs.go:276] 0 containers: []
W0216 09:02:15.303735 5455 logs.go:278] No container was found matching "kindnet"
I0216 09:02:15.303746 5455 logs.go:123] Gathering logs for describe nodes ...
I0216 09:02:15.303766 5455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0216 09:02:15.366132 5455 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0216 09:02:15.366144 5455 logs.go:123] Gathering logs for Docker ...
I0216 09:02:15.366152 5455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0216 09:02:15.389489 5455 logs.go:123] Gathering logs for container status ...
I0216 09:02:15.389507 5455 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0216 09:02:15.456291 5455 logs.go:123] Gathering logs for kubelet ...
I0216 09:02:15.456308 5455 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0216 09:02:15.503191 5455 logs.go:123] Gathering logs for dmesg ...
I0216 09:02:15.503210 5455 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W0216 09:02:15.524513 5455 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0216 17:00:18.672180 4774 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0216 17:00:20.129169 4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0216 17:00:20.129911 4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0216 09:02:15.524536 5455 out.go:239] *
W0216 09:02:15.524574 5455 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0216 17:00:18.672180 4774 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0216 17:00:20.129169 4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0216 17:00:20.129911 4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0216 09:02:15.524590 5455 out.go:239] *
W0216 09:02:15.525179 5455 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0216 09:02:15.588604 5455 out.go:177]
W0216 09:02:15.630384 5455 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0216 17:00:18.672180 4774 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0216 17:00:20.129169 4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0216 17:00:20.129911 4774 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0216 09:02:15.630464 5455 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0216 09:02:15.630491 5455 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0216 09:02:15.651589 5455 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-502000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (278.04s)