=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-996000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0103 12:04:39.355774 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:04:58.342293 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.352492 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.362771 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.384866 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.425099 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.507231 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.667727 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:58.988221 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:04:59.630064 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:00.910409 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:03.472618 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:07.040991 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/addons-927000/client.crt: no such file or directory
E0103 12:05:08.594706 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:18.835746 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:05:39.315456 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0103 12:06:20.274639 11090 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/functional-307000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-996000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m21.083305452s)
-- stdout --
* [ingress-addon-legacy-996000] minikube v1.32.0 on Darwin 14.2
- MINIKUBE_LOCATION=17885
- KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-996000 in cluster ingress-addon-legacy-996000
* Pulling base image v0.0.42-1703498848-17857 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0103 12:02:28.804728 14062 out.go:296] Setting OutFile to fd 1 ...
I0103 12:02:28.804947 14062 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:02:28.804952 14062 out.go:309] Setting ErrFile to fd 2...
I0103 12:02:28.804956 14062 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 12:02:28.805144 14062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17885-10646/.minikube/bin
I0103 12:02:28.806610 14062 out.go:303] Setting JSON to false
I0103 12:02:28.829133 14062 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5518,"bootTime":1704306630,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0103 12:02:28.829228 14062 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0103 12:02:28.850878 14062 out.go:177] * [ingress-addon-legacy-996000] minikube v1.32.0 on Darwin 14.2
I0103 12:02:28.908423 14062 out.go:177] - MINIKUBE_LOCATION=17885
I0103 12:02:28.908522 14062 notify.go:220] Checking for updates...
I0103 12:02:28.929553 14062 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/17885-10646/kubeconfig
I0103 12:02:28.952648 14062 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0103 12:02:28.973267 14062 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0103 12:02:28.994519 14062 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17885-10646/.minikube
I0103 12:02:29.036391 14062 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0103 12:02:29.058098 14062 driver.go:392] Setting default libvirt URI to qemu:///system
I0103 12:02:29.115827 14062 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
I0103 12:02:29.115980 14062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0103 12:02:29.215750 14062 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-03 20:02:29.206510638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
I0103 12:02:29.237400 14062 out.go:177] * Using the docker driver based on user configuration
I0103 12:02:29.258963 14062 start.go:298] selected driver: docker
I0103 12:02:29.258993 14062 start.go:902] validating driver "docker" against <nil>
I0103 12:02:29.259007 14062 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0103 12:02:29.263436 14062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0103 12:02:29.364874 14062 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-03 20:02:29.356293842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
I0103 12:02:29.365059 14062 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0103 12:02:29.365246 14062 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0103 12:02:29.386558 14062 out.go:177] * Using Docker Desktop driver with root privileges
I0103 12:02:29.407629 14062 cni.go:84] Creating CNI manager for ""
I0103 12:02:29.407673 14062 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0103 12:02:29.407693 14062 start_flags.go:323] config:
{Name:ingress-addon-legacy-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0103 12:02:29.450500 14062 out.go:177] * Starting control plane node ingress-addon-legacy-996000 in cluster ingress-addon-legacy-996000
I0103 12:02:29.471600 14062 cache.go:121] Beginning downloading kic base image for docker with docker
I0103 12:02:29.494429 14062 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
I0103 12:02:29.536398 14062 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0103 12:02:29.536496 14062 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
I0103 12:02:29.591211 14062 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
I0103 12:02:29.591238 14062 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
I0103 12:02:29.595294 14062 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0103 12:02:29.595307 14062 cache.go:56] Caching tarball of preloaded images
I0103 12:02:29.595492 14062 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0103 12:02:29.616503 14062 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0103 12:02:29.659309 14062 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0103 12:02:29.744685 14062 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0103 12:02:36.121197 14062 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0103 12:02:36.121372 14062 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0103 12:02:36.755953 14062 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0103 12:02:36.756212 14062 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/config.json ...
I0103 12:02:36.756235 14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/config.json: {Name:mk0057a77f8a4872e0e4ef2d65f0a305812e68d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 12:02:36.756526 14062 cache.go:194] Successfully downloaded all kic artifacts
I0103 12:02:36.756558 14062 start.go:365] acquiring machines lock for ingress-addon-legacy-996000: {Name:mk776a6ad7fbaf0f5c5fac522d51577a218f4dfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0103 12:02:36.756656 14062 start.go:369] acquired machines lock for "ingress-addon-legacy-996000" in 87.598µs
I0103 12:02:36.756678 14062 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0103 12:02:36.756723 14062 start.go:125] createHost starting for "" (driver="docker")
I0103 12:02:36.790798 14062 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0103 12:02:36.790996 14062 start.go:159] libmachine.API.Create for "ingress-addon-legacy-996000" (driver="docker")
I0103 12:02:36.791022 14062 client.go:168] LocalClient.Create starting
I0103 12:02:36.791116 14062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem
I0103 12:02:36.791161 14062 main.go:141] libmachine: Decoding PEM data...
I0103 12:02:36.791178 14062 main.go:141] libmachine: Parsing certificate...
I0103 12:02:36.791221 14062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem
I0103 12:02:36.791255 14062 main.go:141] libmachine: Decoding PEM data...
I0103 12:02:36.791267 14062 main.go:141] libmachine: Parsing certificate...
I0103 12:02:36.811464 14062 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0103 12:02:36.864258 14062 cli_runner.go:211] docker network inspect ingress-addon-legacy-996000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0103 12:02:36.864387 14062 network_create.go:281] running [docker network inspect ingress-addon-legacy-996000] to gather additional debugging logs...
I0103 12:02:36.864407 14062 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-996000
W0103 12:02:36.914865 14062 cli_runner.go:211] docker network inspect ingress-addon-legacy-996000 returned with exit code 1
I0103 12:02:36.914893 14062 network_create.go:284] error running [docker network inspect ingress-addon-legacy-996000]: docker network inspect ingress-addon-legacy-996000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-996000 not found
I0103 12:02:36.914907 14062 network_create.go:286] output of [docker network inspect ingress-addon-legacy-996000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-996000 not found
** /stderr **
I0103 12:02:36.915039 14062 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0103 12:02:36.966424 14062 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005ad4b0}
I0103 12:02:36.966462 14062 network_create.go:124] attempt to create docker network ingress-addon-legacy-996000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0103 12:02:36.966541 14062 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-996000 ingress-addon-legacy-996000
I0103 12:02:37.052462 14062 network_create.go:108] docker network ingress-addon-legacy-996000 192.168.49.0/24 created
I0103 12:02:37.052527 14062 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-996000" container
I0103 12:02:37.052658 14062 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0103 12:02:37.103600 14062 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-996000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996000 --label created_by.minikube.sigs.k8s.io=true
I0103 12:02:37.155536 14062 oci.go:103] Successfully created a docker volume ingress-addon-legacy-996000
I0103 12:02:37.155669 14062 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-996000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996000 --entrypoint /usr/bin/test -v ingress-addon-legacy-996000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
I0103 12:02:37.519639 14062 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-996000
I0103 12:02:37.519680 14062 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0103 12:02:37.519693 14062 kic.go:194] Starting extracting preloaded images to volume ...
I0103 12:02:37.519814 14062 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-996000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
I0103 12:02:39.873546 14062 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-996000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (2.353711583s)
I0103 12:02:39.873573 14062 kic.go:203] duration metric: took 2.353938 seconds to extract preloaded images to volume
I0103 12:02:39.873682 14062 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0103 12:02:39.974396 14062 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-996000 --name ingress-addon-legacy-996000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-996000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-996000 --network ingress-addon-legacy-996000 --ip 192.168.49.2 --volume ingress-addon-legacy-996000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
I0103 12:02:40.245678 14062 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Running}}
I0103 12:02:40.301294 14062 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
I0103 12:02:40.356518 14062 cli_runner.go:164] Run: docker exec ingress-addon-legacy-996000 stat /var/lib/dpkg/alternatives/iptables
I0103 12:02:40.508663 14062 oci.go:144] the created container "ingress-addon-legacy-996000" has a running status.
I0103 12:02:40.508707 14062 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa...
I0103 12:02:40.594462 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0103 12:02:40.594523 14062 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0103 12:02:40.662628 14062 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
I0103 12:02:40.717647 14062 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0103 12:02:40.717670 14062 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-996000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0103 12:02:40.825230 14062 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-996000 --format={{.State.Status}}
I0103 12:02:40.877119 14062 machine.go:88] provisioning docker machine ...
I0103 12:02:40.877180 14062 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-996000"
I0103 12:02:40.877289 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:40.929446 14062 main.go:141] libmachine: Using SSH client type: native
I0103 12:02:40.929784 14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil> [] 0s} 127.0.0.1 58372 <nil> <nil>}
I0103 12:02:40.929800 14062 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-996000 && echo "ingress-addon-legacy-996000" | sudo tee /etc/hostname
I0103 12:02:41.058475 14062 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-996000
I0103 12:02:41.058563 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:41.111183 14062 main.go:141] libmachine: Using SSH client type: native
I0103 12:02:41.111489 14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil> [] 0s} 127.0.0.1 58372 <nil> <nil>}
I0103 12:02:41.111507 14062 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-996000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-996000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-996000' | sudo tee -a /etc/hosts;
fi
fi
I0103 12:02:41.231939 14062 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0103 12:02:41.231969 14062 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17885-10646/.minikube CaCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17885-10646/.minikube}
I0103 12:02:41.231994 14062 ubuntu.go:177] setting up certificates
I0103 12:02:41.232002 14062 provision.go:83] configureAuth start
I0103 12:02:41.232070 14062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996000
I0103 12:02:41.284227 14062 provision.go:138] copyHostCerts
I0103 12:02:41.284270 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
I0103 12:02:41.284325 14062 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem, removing ...
I0103 12:02:41.284332 14062 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem
I0103 12:02:41.284471 14062 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.pem (1078 bytes)
I0103 12:02:41.284661 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
I0103 12:02:41.284688 14062 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem, removing ...
I0103 12:02:41.284693 14062 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem
I0103 12:02:41.284785 14062 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/cert.pem (1123 bytes)
I0103 12:02:41.284931 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
I0103 12:02:41.284969 14062 exec_runner.go:144] found /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem, removing ...
I0103 12:02:41.284974 14062 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem
I0103 12:02:41.285059 14062 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17885-10646/.minikube/key.pem (1679 bytes)
I0103 12:02:41.285215 14062 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-996000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-996000]
I0103 12:02:41.531802 14062 provision.go:172] copyRemoteCerts
I0103 12:02:41.531869 14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0103 12:02:41.531940 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:41.584780 14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
I0103 12:02:41.673213 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0103 12:02:41.673285 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0103 12:02:41.693969 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem -> /etc/docker/server.pem
I0103 12:02:41.694037 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0103 12:02:41.714129 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0103 12:02:41.714217 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0103 12:02:41.734862 14062 provision.go:86] duration metric: configureAuth took 502.855937ms
I0103 12:02:41.734879 14062 ubuntu.go:193] setting minikube options for container-runtime
I0103 12:02:41.735027 14062 config.go:182] Loaded profile config "ingress-addon-legacy-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0103 12:02:41.735100 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:41.787939 14062 main.go:141] libmachine: Using SSH client type: native
I0103 12:02:41.788242 14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil> [] 0s} 127.0.0.1 58372 <nil> <nil>}
I0103 12:02:41.788259 14062 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0103 12:02:41.907743 14062 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0103 12:02:41.907760 14062 ubuntu.go:71] root file system type: overlay
I0103 12:02:41.907864 14062 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0103 12:02:41.907950 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:41.960776 14062 main.go:141] libmachine: Using SSH client type: native
I0103 12:02:41.961111 14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil> [] 0s} 127.0.0.1 58372 <nil> <nil>}
I0103 12:02:41.961162 14062 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0103 12:02:42.089486 14062 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0103 12:02:42.089580 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:42.144997 14062 main.go:141] libmachine: Using SSH client type: native
I0103 12:02:42.145322 14062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil> [] 0s} 127.0.0.1 58372 <nil> <nil>}
I0103 12:02:42.145337 14062 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0103 12:02:42.699552 14062 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-10-26 09:06:22.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-01-03 20:02:42.087182906 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0103 12:02:42.699577 14062 machine.go:91] provisioned docker machine in 1.822463077s
I0103 12:02:42.699584 14062 client.go:171] LocalClient.Create took 5.908706394s
I0103 12:02:42.699611 14062 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-996000" took 5.908763917s
I0103 12:02:42.699623 14062 start.go:300] post-start starting for "ingress-addon-legacy-996000" (driver="docker")
I0103 12:02:42.699632 14062 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0103 12:02:42.699698 14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0103 12:02:42.699760 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:42.751380 14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
I0103 12:02:42.839441 14062 ssh_runner.go:195] Run: cat /etc/os-release
I0103 12:02:42.843240 14062 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0103 12:02:42.843268 14062 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0103 12:02:42.843276 14062 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0103 12:02:42.843282 14062 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0103 12:02:42.843293 14062 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/addons for local assets ...
I0103 12:02:42.843385 14062 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17885-10646/.minikube/files for local assets ...
I0103 12:02:42.843563 14062 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> 110902.pem in /etc/ssl/certs
I0103 12:02:42.843575 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> /etc/ssl/certs/110902.pem
I0103 12:02:42.843809 14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0103 12:02:42.851659 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /etc/ssl/certs/110902.pem (1708 bytes)
I0103 12:02:42.871685 14062 start.go:303] post-start completed in 172.057141ms
I0103 12:02:42.872260 14062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996000
I0103 12:02:42.924398 14062 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/config.json ...
I0103 12:02:42.924867 14062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0103 12:02:42.924925 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:42.976187 14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
I0103 12:02:43.060503 14062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0103 12:02:43.065284 14062 start.go:128] duration metric: createHost completed in 6.308704109s
I0103 12:02:43.065304 14062 start.go:83] releasing machines lock for "ingress-addon-legacy-996000", held for 6.308799819s
I0103 12:02:43.065396 14062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-996000
I0103 12:02:43.116690 14062 ssh_runner.go:195] Run: cat /version.json
I0103 12:02:43.116720 14062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0103 12:02:43.116764 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:43.116800 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:43.170678 14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
I0103 12:02:43.170708 14062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58372 SSHKeyPath:/Users/jenkins/minikube-integration/17885-10646/.minikube/machines/ingress-addon-legacy-996000/id_rsa Username:docker}
I0103 12:02:43.363029 14062 ssh_runner.go:195] Run: systemctl --version
I0103 12:02:43.367695 14062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0103 12:02:43.372477 14062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0103 12:02:43.393726 14062 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0103 12:02:43.393788 14062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0103 12:02:43.408636 14062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0103 12:02:43.423416 14062 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0103 12:02:43.423436 14062 start.go:475] detecting cgroup driver to use...
I0103 12:02:43.423452 14062 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0103 12:02:43.423569 14062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0103 12:02:43.438314 14062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0103 12:02:43.447849 14062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0103 12:02:43.456858 14062 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0103 12:02:43.456917 14062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0103 12:02:43.466205 14062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0103 12:02:43.475253 14062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0103 12:02:43.484350 14062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0103 12:02:43.493472 14062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0103 12:02:43.502037 14062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0103 12:02:43.511434 14062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0103 12:02:43.519355 14062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0103 12:02:43.527063 14062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0103 12:02:43.578162 14062 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0103 12:02:43.662267 14062 start.go:475] detecting cgroup driver to use...
I0103 12:02:43.662287 14062 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0103 12:02:43.662363 14062 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0103 12:02:43.686480 14062 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0103 12:02:43.686545 14062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0103 12:02:43.697596 14062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0103 12:02:43.713403 14062 ssh_runner.go:195] Run: which cri-dockerd
I0103 12:02:43.717834 14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0103 12:02:43.726976 14062 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0103 12:02:43.744377 14062 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0103 12:02:43.826492 14062 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0103 12:02:43.922144 14062 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0103 12:02:43.922232 14062 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0103 12:02:43.938424 14062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0103 12:02:44.021042 14062 ssh_runner.go:195] Run: sudo systemctl restart docker
I0103 12:02:44.255599 14062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0103 12:02:44.278836 14062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0103 12:02:44.324595 14062 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
I0103 12:02:44.324724 14062 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-996000 dig +short host.docker.internal
I0103 12:02:44.448874 14062 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0103 12:02:44.448971 14062 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0103 12:02:44.453488 14062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0103 12:02:44.463686 14062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-996000
I0103 12:02:44.514893 14062 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0103 12:02:44.514967 14062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0103 12:02:44.534376 14062 docker.go:671] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0103 12:02:44.534389 14062 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0103 12:02:44.534444 14062 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0103 12:02:44.542718 14062 ssh_runner.go:195] Run: which lz4
I0103 12:02:44.546722 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0103 12:02:44.546848 14062 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0103 12:02:44.550877 14062 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0103 12:02:44.550903 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0103 12:02:50.221037 14062 docker.go:635] Took 5.674384 seconds to copy over tarball
I0103 12:02:50.221101 14062 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0103 12:02:51.860731 14062 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.639654618s)
I0103 12:02:51.860748 14062 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0103 12:02:51.905381 14062 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0103 12:02:51.914050 14062 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0103 12:02:51.929120 14062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0103 12:02:51.980936 14062 ssh_runner.go:195] Run: sudo systemctl restart docker
I0103 12:02:52.973556 14062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0103 12:02:52.992263 14062 docker.go:671] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0103 12:02:52.992275 14062 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0103 12:02:52.992287 14062 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0103 12:02:53.001667 14062 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0103 12:02:53.001712 14062 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0103 12:02:53.001669 14062 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0103 12:02:53.001755 14062 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0103 12:02:53.001774 14062 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0103 12:02:53.001776 14062 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0103 12:02:53.001958 14062 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0103 12:02:53.004009 14062 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0103 12:02:53.006555 14062 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0103 12:02:53.006760 14062 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0103 12:02:53.006978 14062 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0103 12:02:53.007080 14062 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0103 12:02:53.007125 14062 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0103 12:02:53.007600 14062 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0103 12:02:53.008105 14062 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0103 12:02:53.009302 14062 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0103 12:02:53.512576 14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0103 12:02:53.531360 14062 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0103 12:02:53.531402 14062 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0103 12:02:53.531466 14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0103 12:02:53.541015 14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0103 12:02:53.541798 14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0103 12:02:53.549358 14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0103 12:02:53.550369 14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0103 12:02:53.563757 14062 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0103 12:02:53.563791 14062 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0103 12:02:53.563926 14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0103 12:02:53.565163 14062 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0103 12:02:53.565186 14062 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
I0103 12:02:53.565248 14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0103 12:02:53.584868 14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0103 12:02:53.584955 14062 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0103 12:02:53.584975 14062 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0103 12:02:53.585038 14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0103 12:02:53.589662 14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0103 12:02:53.594410 14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0103 12:02:53.609010 14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0103 12:02:53.610632 14062 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0103 12:02:53.610656 14062 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0103 12:02:53.610715 14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0103 12:02:53.628211 14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0103 12:02:53.637415 14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0103 12:02:53.661254 14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0103 12:02:53.678517 14062 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0103 12:02:53.678546 14062 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
I0103 12:02:53.678607 14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0103 12:02:53.682905 14062 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0103 12:02:53.697212 14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0103 12:02:53.702143 14062 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0103 12:02:53.702166 14062 docker.go:323] Removing image: registry.k8s.io/pause:3.2
I0103 12:02:53.702233 14062 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0103 12:02:53.719030 14062 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0103 12:02:53.719077 14062 cache_images.go:92] LoadImages completed in 726.799551ms
W0103 12:02:53.719111 14062 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17885-10646/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
I0103 12:02:53.719182 14062 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0103 12:02:53.767888 14062 cni.go:84] Creating CNI manager for ""
I0103 12:02:53.767907 14062 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0103 12:02:53.767919 14062 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0103 12:02:53.767939 14062 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-996000 NodeName:ingress-addon-legacy-996000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0103 12:02:53.768041 14062 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-996000"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0103 12:02:53.768094 14062 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-996000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0103 12:02:53.768158 14062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0103 12:02:53.776471 14062 binaries.go:44] Found k8s binaries, skipping transfer
I0103 12:02:53.776527 14062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0103 12:02:53.784412 14062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0103 12:02:53.799563 14062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0103 12:02:53.815176 14062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0103 12:02:53.841967 14062 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0103 12:02:53.846119 14062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0103 12:02:53.856585 14062 certs.go:56] Setting up /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000 for IP: 192.168.49.2
I0103 12:02:53.856608 14062 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a30c05f18415c794a1ae2617714fd3a6ba516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 12:02:53.856787 14062 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key
I0103 12:02:53.856889 14062 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key
I0103 12:02:53.856947 14062 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.key
I0103 12:02:53.856961 14062 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.crt with IP's: []
I0103 12:02:54.125255 14062 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.crt ...
I0103 12:02:54.125269 14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.crt: {Name:mk3fad42c70d612449fc9d243d5b4fcc559d2f57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 12:02:54.125587 14062 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.key ...
I0103 12:02:54.125596 14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/client.key: {Name:mk1bc7cf77add520d8f141f41b4a723ff72481f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 12:02:54.125819 14062 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key.dd3b5fb2
I0103 12:02:54.125840 14062 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0103 12:02:54.214162 14062 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt.dd3b5fb2 ...
I0103 12:02:54.214174 14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt.dd3b5fb2: {Name:mkcacd6c5af309e011baeadc8b6a0a3fb281f3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 12:02:54.214454 14062 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key.dd3b5fb2 ...
I0103 12:02:54.214463 14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key.dd3b5fb2: {Name:mke1995c76463b6a38c3c3214ea4cecf1304f436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 12:02:54.214658 14062 certs.go:337] copying /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt
I0103 12:02:54.214834 14062 certs.go:341] copying /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key
I0103 12:02:54.215002 14062 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key
I0103 12:02:54.215016 14062 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt with IP's: []
I0103 12:02:54.585890 14062 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt ...
I0103 12:02:54.585905 14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt: {Name:mk01de5b8eec29fcaf2b145d43418e4d3023c940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 12:02:54.586185 14062 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key ...
I0103 12:02:54.586200 14062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key: {Name:mkf84a96104b8716a9cd667aa3d9e48ed023e399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0103 12:02:54.586419 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0103 12:02:54.586451 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0103 12:02:54.586470 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0103 12:02:54.586487 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0103 12:02:54.586505 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0103 12:02:54.586530 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0103 12:02:54.586546 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0103 12:02:54.586564 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0103 12:02:54.586649 14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem (1338 bytes)
W0103 12:02:54.586708 14062 certs.go:433] ignoring /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090_empty.pem, impossibly tiny 0 bytes
I0103 12:02:54.586718 14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca-key.pem (1675 bytes)
I0103 12:02:54.586754 14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/ca.pem (1078 bytes)
I0103 12:02:54.586782 14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/cert.pem (1123 bytes)
I0103 12:02:54.586815 14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/certs/key.pem (1679 bytes)
I0103 12:02:54.586876 14062 certs.go:437] found cert: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem (1708 bytes)
I0103 12:02:54.586909 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0103 12:02:54.586930 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem -> /usr/share/ca-certificates/11090.pem
I0103 12:02:54.586946 14062 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem -> /usr/share/ca-certificates/110902.pem
I0103 12:02:54.587393 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0103 12:02:54.607914 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0103 12:02:54.628089 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0103 12:02:54.648598 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/profiles/ingress-addon-legacy-996000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0103 12:02:54.668969 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0103 12:02:54.689963 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0103 12:02:54.710743 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0103 12:02:54.731103 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0103 12:02:54.750939 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0103 12:02:54.771579 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/certs/11090.pem --> /usr/share/ca-certificates/11090.pem (1338 bytes)
I0103 12:02:54.791861 14062 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17885-10646/.minikube/files/etc/ssl/certs/110902.pem --> /usr/share/ca-certificates/110902.pem (1708 bytes)
I0103 12:02:54.812207 14062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0103 12:02:54.827571 14062 ssh_runner.go:195] Run: openssl version
I0103 12:02:54.832969 14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110902.pem && ln -fs /usr/share/ca-certificates/110902.pem /etc/ssl/certs/110902.pem"
I0103 12:02:54.841908 14062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110902.pem
I0103 12:02:54.845994 14062 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 3 19:57 /usr/share/ca-certificates/110902.pem
I0103 12:02:54.846041 14062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110902.pem
I0103 12:02:54.852372 14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110902.pem /etc/ssl/certs/3ec20f2e.0"
I0103 12:02:54.861481 14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0103 12:02:54.870395 14062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0103 12:02:54.874637 14062 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 3 19:52 /usr/share/ca-certificates/minikubeCA.pem
I0103 12:02:54.874685 14062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0103 12:02:54.881350 14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0103 12:02:54.890315 14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11090.pem && ln -fs /usr/share/ca-certificates/11090.pem /etc/ssl/certs/11090.pem"
I0103 12:02:54.899028 14062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11090.pem
I0103 12:02:54.903032 14062 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 3 19:57 /usr/share/ca-certificates/11090.pem
I0103 12:02:54.903080 14062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11090.pem
I0103 12:02:54.909580 14062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11090.pem /etc/ssl/certs/51391683.0"
I0103 12:02:54.918423 14062 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0103 12:02:54.922417 14062 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0103 12:02:54.922466 14062 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0103 12:02:54.922561 14062 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0103 12:02:54.939486 14062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0103 12:02:54.947731 14062 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0103 12:02:54.955815 14062 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0103 12:02:54.955878 14062 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0103 12:02:54.963864 14062 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0103 12:02:54.963891 14062 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0103 12:02:55.009324 14062 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0103 12:02:55.009376 14062 kubeadm.go:322] [preflight] Running pre-flight checks
I0103 12:02:55.232707 14062 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0103 12:02:55.232799 14062 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0103 12:02:55.232915 14062 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0103 12:02:55.400893 14062 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0103 12:02:55.401655 14062 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0103 12:02:55.401694 14062 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0103 12:02:55.477190 14062 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0103 12:02:55.498754 14062 out.go:204] - Generating certificates and keys ...
I0103 12:02:55.498846 14062 kubeadm.go:322] [certs] Using existing ca certificate authority
I0103 12:02:55.498917 14062 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0103 12:02:55.706912 14062 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0103 12:02:55.782427 14062 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0103 12:02:55.916209 14062 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0103 12:02:56.016246 14062 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0103 12:02:56.263324 14062 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0103 12:02:56.263443 14062 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0103 12:02:56.457698 14062 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0103 12:02:56.457817 14062 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0103 12:02:56.507725 14062 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0103 12:02:56.597866 14062 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0103 12:02:56.785701 14062 kubeadm.go:322] [certs] Generating "sa" key and public key
I0103 12:02:56.785805 14062 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0103 12:02:56.856390 14062 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0103 12:02:57.065633 14062 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0103 12:02:57.374571 14062 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0103 12:02:57.598758 14062 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0103 12:02:57.599626 14062 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0103 12:02:57.621123 14062 out.go:204] - Booting up control plane ...
I0103 12:02:57.621239 14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0103 12:02:57.621325 14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0103 12:02:57.621416 14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0103 12:02:57.621510 14062 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0103 12:02:57.621695 14062 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0103 12:03:37.607377 14062 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0103 12:03:37.607913 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:03:37.608138 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:03:42.609092 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:03:42.609310 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:03:52.609235 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:03:52.609401 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:04:12.610413 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:04:12.610634 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:04:52.611468 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:04:52.611778 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:04:52.611796 14062 kubeadm.go:322]
I0103 12:04:52.611885 14062 kubeadm.go:322] Unfortunately, an error has occurred:
I0103 12:04:52.611982 14062 kubeadm.go:322] timed out waiting for the condition
I0103 12:04:52.611997 14062 kubeadm.go:322]
I0103 12:04:52.612056 14062 kubeadm.go:322] This error is likely caused by:
I0103 12:04:52.612114 14062 kubeadm.go:322] - The kubelet is not running
I0103 12:04:52.612236 14062 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0103 12:04:52.612250 14062 kubeadm.go:322]
I0103 12:04:52.612363 14062 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0103 12:04:52.612397 14062 kubeadm.go:322] - 'systemctl status kubelet'
I0103 12:04:52.612433 14062 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0103 12:04:52.612439 14062 kubeadm.go:322]
I0103 12:04:52.612577 14062 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0103 12:04:52.612678 14062 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0103 12:04:52.612685 14062 kubeadm.go:322]
I0103 12:04:52.612785 14062 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0103 12:04:52.612914 14062 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0103 12:04:52.612983 14062 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0103 12:04:52.613011 14062 kubeadm.go:322] - 'docker logs CONTAINERID'
I0103 12:04:52.613017 14062 kubeadm.go:322]
I0103 12:04:52.614251 14062 kubeadm.go:322] W0103 20:02:55.009106 1700 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0103 12:04:52.614399 14062 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0103 12:04:52.614469 14062 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0103 12:04:52.614582 14062 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0103 12:04:52.614672 14062 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0103 12:04:52.614772 14062 kubeadm.go:322] W0103 20:02:57.604146 1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0103 12:04:52.614904 14062 kubeadm.go:322] W0103 20:02:57.604917 1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0103 12:04:52.614978 14062 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0103 12:04:52.615047 14062 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0103 12:04:52.615146 14062 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0103 20:02:55.009106 1700 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0103 20:02:57.604146 1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0103 20:02:57.604917 1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-996000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0103 20:02:55.009106 1700 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0103 20:02:57.604146 1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0103 20:02:57.604917 1700 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0103 12:04:52.615181 14062 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0103 12:04:53.030723 14062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0103 12:04:53.042912 14062 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0103 12:04:53.042984 14062 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0103 12:04:53.051964 14062 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0103 12:04:53.051996 14062 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0103 12:04:53.098760 14062 kubeadm.go:322] W0103 20:04:53.098531 4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0103 12:04:53.203831 14062 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0103 12:04:53.203940 14062 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0103 12:04:53.254294 14062 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0103 12:04:53.331921 14062 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0103 12:04:54.261401 14062 kubeadm.go:322] W0103 20:04:54.261367 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0103 12:04:54.262116 14062 kubeadm.go:322] W0103 20:04:54.262066 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0103 12:06:49.269158 14062 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0103 12:06:49.269226 14062 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0103 12:06:49.271588 14062 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0103 12:06:49.271639 14062 kubeadm.go:322] [preflight] Running pre-flight checks
I0103 12:06:49.271711 14062 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0103 12:06:49.271781 14062 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0103 12:06:49.271844 14062 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0103 12:06:49.271959 14062 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0103 12:06:49.272077 14062 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0103 12:06:49.272109 14062 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0103 12:06:49.272152 14062 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0103 12:06:49.293296 14062 out.go:204] - Generating certificates and keys ...
I0103 12:06:49.293381 14062 kubeadm.go:322] [certs] Using existing ca certificate authority
I0103 12:06:49.293436 14062 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0103 12:06:49.293537 14062 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0103 12:06:49.293589 14062 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0103 12:06:49.293636 14062 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0103 12:06:49.293676 14062 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0103 12:06:49.293744 14062 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0103 12:06:49.293822 14062 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0103 12:06:49.293881 14062 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0103 12:06:49.293942 14062 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0103 12:06:49.294008 14062 kubeadm.go:322] [certs] Using the existing "sa" key
I0103 12:06:49.294086 14062 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0103 12:06:49.294128 14062 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0103 12:06:49.294168 14062 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0103 12:06:49.294225 14062 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0103 12:06:49.294275 14062 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0103 12:06:49.294329 14062 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0103 12:06:49.315382 14062 out.go:204] - Booting up control plane ...
I0103 12:06:49.315525 14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0103 12:06:49.315662 14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0103 12:06:49.315802 14062 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0103 12:06:49.315956 14062 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0103 12:06:49.316233 14062 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0103 12:06:49.316312 14062 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0103 12:06:49.316417 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:06:49.316701 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:06:49.316802 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:06:49.317070 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:06:49.317180 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:06:49.317379 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:06:49.317461 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:06:49.317672 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:06:49.317753 14062 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0103 12:06:49.317948 14062 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0103 12:06:49.317963 14062 kubeadm.go:322]
I0103 12:06:49.318003 14062 kubeadm.go:322] Unfortunately, an error has occurred:
I0103 12:06:49.318049 14062 kubeadm.go:322] timed out waiting for the condition
I0103 12:06:49.318059 14062 kubeadm.go:322]
I0103 12:06:49.318099 14062 kubeadm.go:322] This error is likely caused by:
I0103 12:06:49.318138 14062 kubeadm.go:322] - The kubelet is not running
I0103 12:06:49.318253 14062 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0103 12:06:49.318264 14062 kubeadm.go:322]
I0103 12:06:49.318378 14062 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0103 12:06:49.318415 14062 kubeadm.go:322] - 'systemctl status kubelet'
I0103 12:06:49.318446 14062 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0103 12:06:49.318454 14062 kubeadm.go:322]
I0103 12:06:49.318561 14062 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0103 12:06:49.318653 14062 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0103 12:06:49.318668 14062 kubeadm.go:322]
I0103 12:06:49.318768 14062 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0103 12:06:49.318829 14062 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0103 12:06:49.318917 14062 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0103 12:06:49.318954 14062 kubeadm.go:322] - 'docker logs CONTAINERID'
I0103 12:06:49.318970 14062 kubeadm.go:322]
I0103 12:06:49.319007 14062 kubeadm.go:406] StartCluster complete in 3m54.402477068s
I0103 12:06:49.319112 14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0103 12:06:49.336809 14062 logs.go:284] 0 containers: []
W0103 12:06:49.336823 14062 logs.go:286] No container was found matching "kube-apiserver"
I0103 12:06:49.336897 14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0103 12:06:49.354415 14062 logs.go:284] 0 containers: []
W0103 12:06:49.354427 14062 logs.go:286] No container was found matching "etcd"
I0103 12:06:49.354498 14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0103 12:06:49.372526 14062 logs.go:284] 0 containers: []
W0103 12:06:49.372544 14062 logs.go:286] No container was found matching "coredns"
I0103 12:06:49.372610 14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0103 12:06:49.389656 14062 logs.go:284] 0 containers: []
W0103 12:06:49.389670 14062 logs.go:286] No container was found matching "kube-scheduler"
I0103 12:06:49.389756 14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0103 12:06:49.407719 14062 logs.go:284] 0 containers: []
W0103 12:06:49.407733 14062 logs.go:286] No container was found matching "kube-proxy"
I0103 12:06:49.407801 14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0103 12:06:49.426164 14062 logs.go:284] 0 containers: []
W0103 12:06:49.426178 14062 logs.go:286] No container was found matching "kube-controller-manager"
I0103 12:06:49.426254 14062 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0103 12:06:49.443234 14062 logs.go:284] 0 containers: []
W0103 12:06:49.443248 14062 logs.go:286] No container was found matching "kindnet"
I0103 12:06:49.443261 14062 logs.go:123] Gathering logs for kubelet ...
I0103 12:06:49.443274 14062 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0103 12:06:49.478428 14062 logs.go:123] Gathering logs for dmesg ...
I0103 12:06:49.478444 14062 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0103 12:06:49.490406 14062 logs.go:123] Gathering logs for describe nodes ...
I0103 12:06:49.490420 14062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0103 12:06:49.553716 14062 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0103 12:06:49.553729 14062 logs.go:123] Gathering logs for Docker ...
I0103 12:06:49.553737 14062 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0103 12:06:49.568423 14062 logs.go:123] Gathering logs for container status ...
I0103 12:06:49.568437 14062 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0103 12:06:49.616492 14062 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0103 20:04:53.098531 4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0103 20:04:54.261367 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0103 20:04:54.262066 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0103 12:06:49.616517 14062 out.go:239] *
*
W0103 12:06:49.616558 14062 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0103 20:04:53.098531 4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0103 20:04:54.261367 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0103 20:04:54.262066 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0103 20:04:53.098531 4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0103 20:04:54.261367 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0103 20:04:54.262066 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0103 12:06:49.616574 14062 out.go:239] *
*
W0103 12:06:49.617201 14062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0103 12:06:49.679537 14062 out.go:177]
W0103 12:06:49.721553 14062 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0103 20:04:53.098531 4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0103 20:04:54.261367 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0103 20:04:54.262066 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0103 20:04:53.098531 4712 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0103 20:04:54.261367 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0103 20:04:54.262066 4712 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0103 12:06:49.721606 14062 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0103 12:06:49.721638 14062 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
* Related issue: https://github.com/kubernetes/minikube/issues/4172
I0103 12:06:49.763383 14062 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-996000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (261.12s)
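The kubeadm output above repeatedly points at the same two follow-ups: inspect the kubelet on the node (the 10248/healthz probe never answers), and retry with the kubelet cgroup driver set to systemd, since the preflight warning reports Docker using "cgroupfs" and the final Suggestion line recommends --extra-config=kubelet.cgroup-driver=systemd. A minimal diagnostic/retry sketch follows; the binary path, profile name, and start flags are taken from this run's output, while the minikube `ssh` and `delete` subcommands are assumptions not shown in this log.

# Inspect the kubelet and look for crashed control-plane containers inside the node,
# using the commands kubeadm itself suggests above.
out/minikube-darwin-amd64 -p ingress-addon-legacy-996000 ssh -- sudo systemctl status kubelet
out/minikube-darwin-amd64 -p ingress-addon-legacy-996000 ssh -- sudo journalctl -xeu kubelet
out/minikube-darwin-amd64 -p ingress-addon-legacy-996000 ssh -- "docker ps -a | grep kube | grep -v pause"
# For a failing container found above, substitute its ID for the CONTAINERID placeholder:
# out/minikube-darwin-amd64 -p ingress-addon-legacy-996000 ssh -- docker logs CONTAINERID

# Retry with the cgroup driver the log suggests, after removing the failed profile.
out/minikube-darwin-amd64 delete -p ingress-addon-legacy-996000
out/minikube-darwin-amd64 start -p ingress-addon-legacy-996000 --kubernetes-version=v1.18.20 \
  --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker \
  --extra-config=kubelet.cgroup-driver=systemd

This is a sketch of one plausible manual follow-up, not part of the recorded test run; whether the cgroup-driver change alone resolves the K8S_KUBELET_NOT_RUNNING exit depends on what 'journalctl -xeu kubelet' shows on the node.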