=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-181000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0213 15:09:17.230790 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:09:44.920960 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/addons-441000/client.crt: no such file or directory
E0213 15:10:14.207769 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.214287 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.225711 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.247634 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.289876 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.370593 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.530775 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:14.851685 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:15.493944 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:16.775935 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:19.336850 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:24.457144 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:34.698785 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:10:55.179406 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:11:36.141189 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
E0213 15:12:58.060956 6776 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/functional-443000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-181000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m36.264696523s)
-- stdout --
* [ingress-addon-legacy-181000] minikube v1.32.0 on Darwin 14.3.1
- MINIKUBE_LOCATION=18169
- KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-181000 in cluster ingress-addon-legacy-181000
* Pulling base image v0.0.42-1704759386-17866 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0213 15:09:03.594107 9652 out.go:291] Setting OutFile to fd 1 ...
I0213 15:09:03.594358 9652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:09:03.594363 9652 out.go:304] Setting ErrFile to fd 2...
I0213 15:09:03.594367 9652 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:09:03.594539 9652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-6320/.minikube/bin
I0213 15:09:03.596040 9652 out.go:298] Setting JSON to false
I0213 15:09:03.618659 9652 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2603,"bootTime":1707863140,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0213 15:09:03.618764 9652 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0213 15:09:03.640548 9652 out.go:177] * [ingress-addon-legacy-181000] minikube v1.32.0 on Darwin 14.3.1
I0213 15:09:03.683455 9652 out.go:177] - MINIKUBE_LOCATION=18169
I0213 15:09:03.683537 9652 notify.go:220] Checking for updates...
I0213 15:09:03.705468 9652 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/18169-6320/kubeconfig
I0213 15:09:03.727293 9652 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0213 15:09:03.749378 9652 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0213 15:09:03.771227 9652 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-6320/.minikube
I0213 15:09:03.792218 9652 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0213 15:09:03.814615 9652 driver.go:392] Setting default libvirt URI to qemu:///system
I0213 15:09:03.870626 9652 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
I0213 15:09:03.870800 9652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0213 15:09:03.973520 9652 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-13 23:09:03.964251246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
I0213 15:09:03.994573 9652 out.go:177] * Using the docker driver based on user configuration
I0213 15:09:04.036662 9652 start.go:298] selected driver: docker
I0213 15:09:04.036690 9652 start.go:902] validating driver "docker" against <nil>
I0213 15:09:04.036704 9652 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0213 15:09:04.040402 9652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0213 15:09:04.146908 9652 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-13 23:09:04.13775674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
I0213 15:09:04.147092 9652 start_flags.go:307] no existing cluster config was found, will generate one from the flags
I0213 15:09:04.147293 9652 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0213 15:09:04.168907 9652 out.go:177] * Using Docker Desktop driver with root privileges
I0213 15:09:04.191700 9652 cni.go:84] Creating CNI manager for ""
I0213 15:09:04.191729 9652 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0213 15:09:04.191744 9652 start_flags.go:321] config:
{Name:ingress-addon-legacy-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-181000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0213 15:09:04.213787 9652 out.go:177] * Starting control plane node ingress-addon-legacy-181000 in cluster ingress-addon-legacy-181000
I0213 15:09:04.256976 9652 cache.go:121] Beginning downloading kic base image for docker with docker
I0213 15:09:04.278907 9652 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
I0213 15:09:04.320817 9652 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0213 15:09:04.320914 9652 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
I0213 15:09:04.371203 9652 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
I0213 15:09:04.371224 9652 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
I0213 15:09:04.603433 9652 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0213 15:09:04.603462 9652 cache.go:56] Caching tarball of preloaded images
I0213 15:09:04.603842 9652 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0213 15:09:04.625718 9652 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0213 15:09:04.647192 9652 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0213 15:09:05.189528 9652 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0213 15:09:22.641961 9652 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0213 15:09:22.642149 9652 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0213 15:09:23.274186 9652 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0213 15:09:23.274437 9652 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/config.json ...
I0213 15:09:23.274463 9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/config.json: {Name:mkfb5116497b5ef5e775e10a45eb25bdca5f4bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:09:23.274756 9652 cache.go:194] Successfully downloaded all kic artifacts
I0213 15:09:23.274787 9652 start.go:365] acquiring machines lock for ingress-addon-legacy-181000: {Name:mk7bdde0987fe3a73821b7b521ea63475abe23f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0213 15:09:23.274879 9652 start.go:369] acquired machines lock for "ingress-addon-legacy-181000" in 84.602µs
I0213 15:09:23.274899 9652 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-181000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0213 15:09:23.274953 9652 start.go:125] createHost starting for "" (driver="docker")
I0213 15:09:23.301242 9652 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0213 15:09:23.301602 9652 start.go:159] libmachine.API.Create for "ingress-addon-legacy-181000" (driver="docker")
I0213 15:09:23.301648 9652 client.go:168] LocalClient.Create starting
I0213 15:09:23.301847 9652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem
I0213 15:09:23.301940 9652 main.go:141] libmachine: Decoding PEM data...
I0213 15:09:23.301976 9652 main.go:141] libmachine: Parsing certificate...
I0213 15:09:23.302057 9652 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem
I0213 15:09:23.302127 9652 main.go:141] libmachine: Decoding PEM data...
I0213 15:09:23.302144 9652 main.go:141] libmachine: Parsing certificate...
I0213 15:09:23.322678 9652 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-181000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0213 15:09:23.375110 9652 cli_runner.go:211] docker network inspect ingress-addon-legacy-181000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0213 15:09:23.375238 9652 network_create.go:281] running [docker network inspect ingress-addon-legacy-181000] to gather additional debugging logs...
I0213 15:09:23.375259 9652 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-181000
W0213 15:09:23.425810 9652 cli_runner.go:211] docker network inspect ingress-addon-legacy-181000 returned with exit code 1
I0213 15:09:23.425853 9652 network_create.go:284] error running [docker network inspect ingress-addon-legacy-181000]: docker network inspect ingress-addon-legacy-181000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-181000 not found
I0213 15:09:23.425869 9652 network_create.go:286] output of [docker network inspect ingress-addon-legacy-181000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-181000 not found
** /stderr **
I0213 15:09:23.426031 9652 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0213 15:09:23.476436 9652 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004b9810}
I0213 15:09:23.476472 9652 network_create.go:124] attempt to create docker network ingress-addon-legacy-181000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0213 15:09:23.476539 9652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-181000 ingress-addon-legacy-181000
I0213 15:09:23.563711 9652 network_create.go:108] docker network ingress-addon-legacy-181000 192.168.49.0/24 created
I0213 15:09:23.563755 9652 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-181000" container
I0213 15:09:23.563882 9652 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0213 15:09:23.625108 9652 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-181000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-181000 --label created_by.minikube.sigs.k8s.io=true
I0213 15:09:23.676626 9652 oci.go:103] Successfully created a docker volume ingress-addon-legacy-181000
I0213 15:09:23.676756 9652 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-181000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-181000 --entrypoint /usr/bin/test -v ingress-addon-legacy-181000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
I0213 15:09:24.063790 9652 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-181000
I0213 15:09:24.063833 9652 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0213 15:09:24.063848 9652 kic.go:194] Starting extracting preloaded images to volume ...
I0213 15:09:24.063963 9652 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-181000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
I0213 15:09:26.401244 9652 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-181000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.337239049s)
I0213 15:09:26.401272 9652 kic.go:203] duration metric: took 2.337453 seconds to extract preloaded images to volume
I0213 15:09:26.401396 9652 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0213 15:09:26.504609 9652 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-181000 --name ingress-addon-legacy-181000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-181000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-181000 --network ingress-addon-legacy-181000 --ip 192.168.49.2 --volume ingress-addon-legacy-181000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
I0213 15:09:26.762006 9652 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Running}}
I0213 15:09:26.817017 9652 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
I0213 15:09:26.875752 9652 cli_runner.go:164] Run: docker exec ingress-addon-legacy-181000 stat /var/lib/dpkg/alternatives/iptables
I0213 15:09:27.050434 9652 oci.go:144] the created container "ingress-addon-legacy-181000" has a running status.
I0213 15:09:27.050518 9652 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa...
I0213 15:09:27.212222 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0213 15:09:27.212293 9652 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0213 15:09:27.276816 9652 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
I0213 15:09:27.332054 9652 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0213 15:09:27.332077 9652 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-181000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0213 15:09:27.433181 9652 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-181000 --format={{.State.Status}}
I0213 15:09:27.484108 9652 machine.go:88] provisioning docker machine ...
I0213 15:09:27.484166 9652 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-181000"
I0213 15:09:27.484262 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:27.536381 9652 main.go:141] libmachine: Using SSH client type: native
I0213 15:09:27.536720 9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53249 <nil> <nil>}
I0213 15:09:27.536736 9652 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-181000 && echo "ingress-addon-legacy-181000" | sudo tee /etc/hostname
I0213 15:09:27.702473 9652 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-181000
I0213 15:09:27.702572 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:27.755020 9652 main.go:141] libmachine: Using SSH client type: native
I0213 15:09:27.755330 9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53249 <nil> <nil>}
I0213 15:09:27.755346 9652 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-181000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-181000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-181000' | sudo tee -a /etc/hosts;
fi
fi
I0213 15:09:27.893812 9652 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0213 15:09:27.893830 9652 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-6320/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-6320/.minikube}
I0213 15:09:27.893852 9652 ubuntu.go:177] setting up certificates
I0213 15:09:27.893858 9652 provision.go:83] configureAuth start
I0213 15:09:27.893929 9652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-181000
I0213 15:09:27.945748 9652 provision.go:138] copyHostCerts
I0213 15:09:27.945801 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
I0213 15:09:27.945859 9652 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem, removing ...
I0213 15:09:27.945868 9652 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem
I0213 15:09:27.945977 9652 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.pem (1078 bytes)
I0213 15:09:27.946156 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
I0213 15:09:27.946183 9652 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem, removing ...
I0213 15:09:27.946187 9652 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem
I0213 15:09:27.946304 9652 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/cert.pem (1123 bytes)
I0213 15:09:27.946475 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
I0213 15:09:27.946518 9652 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem, removing ...
I0213 15:09:27.946523 9652 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem
I0213 15:09:27.946602 9652 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-6320/.minikube/key.pem (1675 bytes)
I0213 15:09:27.946757 9652 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-181000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-181000]
I0213 15:09:28.118046 9652 provision.go:172] copyRemoteCerts
I0213 15:09:28.118100 9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0213 15:09:28.118161 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:28.169302 9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
I0213 15:09:28.271875 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem -> /etc/docker/server.pem
I0213 15:09:28.271955 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0213 15:09:28.311039 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0213 15:09:28.311108 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0213 15:09:28.350632 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0213 15:09:28.350715 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0213 15:09:28.390750 9652 provision.go:86] duration metric: configureAuth took 496.841679ms
I0213 15:09:28.390764 9652 ubuntu.go:193] setting minikube options for container-runtime
I0213 15:09:28.390972 9652 config.go:182] Loaded profile config "ingress-addon-legacy-181000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0213 15:09:28.391069 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:28.443733 9652 main.go:141] libmachine: Using SSH client type: native
I0213 15:09:28.444024 9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53249 <nil> <nil>}
I0213 15:09:28.444041 9652 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0213 15:09:28.581127 9652 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0213 15:09:28.581145 9652 ubuntu.go:71] root file system type: overlay
I0213 15:09:28.581228 9652 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0213 15:09:28.581309 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:28.631890 9652 main.go:141] libmachine: Using SSH client type: native
I0213 15:09:28.632180 9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53249 <nil> <nil>}
I0213 15:09:28.632227 9652 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0213 15:09:28.794951 9652 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0213 15:09:28.795050 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:28.847257 9652 main.go:141] libmachine: Using SSH client type: native
I0213 15:09:28.847564 9652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53249 <nil> <nil>}
I0213 15:09:28.847577 9652 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0213 15:09:29.473047 9652 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-10-26 09:06:22.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-02-13 23:09:28.789911782 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0213 15:09:29.473071 9652 machine.go:91] provisioned docker machine in 1.988955159s
I0213 15:09:29.473077 9652 client.go:171] LocalClient.Create took 6.171495952s
I0213 15:09:29.473099 9652 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-181000" took 6.171574084s
I0213 15:09:29.473107 9652 start.go:300] post-start starting for "ingress-addon-legacy-181000" (driver="docker")
I0213 15:09:29.473115 9652 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0213 15:09:29.473178 9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0213 15:09:29.473239 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:29.525043 9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
I0213 15:09:29.629655 9652 ssh_runner.go:195] Run: cat /etc/os-release
I0213 15:09:29.633582 9652 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0213 15:09:29.633605 9652 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0213 15:09:29.633612 9652 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0213 15:09:29.633618 9652 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0213 15:09:29.633628 9652 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/addons for local assets ...
I0213 15:09:29.633729 9652 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-6320/.minikube/files for local assets ...
I0213 15:09:29.633915 9652 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> 67762.pem in /etc/ssl/certs
I0213 15:09:29.633921 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> /etc/ssl/certs/67762.pem
I0213 15:09:29.634122 9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0213 15:09:29.648529 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /etc/ssl/certs/67762.pem (1708 bytes)
I0213 15:09:29.687873 9652 start.go:303] post-start completed in 214.759349ms
I0213 15:09:29.688761 9652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-181000
I0213 15:09:29.741235 9652 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/config.json ...
I0213 15:09:29.741685 9652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0213 15:09:29.741762 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:29.793189 9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
I0213 15:09:29.886756 9652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0213 15:09:29.891474 9652 start.go:128] duration metric: createHost completed in 6.616585669s
I0213 15:09:29.891494 9652 start.go:83] releasing machines lock for "ingress-addon-legacy-181000", held for 6.616685649s
I0213 15:09:29.891586 9652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-181000
I0213 15:09:29.942335 9652 ssh_runner.go:195] Run: cat /version.json
I0213 15:09:29.942361 9652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0213 15:09:29.942412 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:29.942435 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:29.999289 9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
I0213 15:09:29.999343 9652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53249 SSHKeyPath:/Users/jenkins/minikube-integration/18169-6320/.minikube/machines/ingress-addon-legacy-181000/id_rsa Username:docker}
I0213 15:09:30.200135 9652 ssh_runner.go:195] Run: systemctl --version
I0213 15:09:30.204802 9652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0213 15:09:30.209910 9652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0213 15:09:30.251292 9652 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0213 15:09:30.251378 9652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0213 15:09:30.279163 9652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0213 15:09:30.307994 9652 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0213 15:09:30.308010 9652 start.go:475] detecting cgroup driver to use...
I0213 15:09:30.308022 9652 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0213 15:09:30.308122 9652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0213 15:09:30.336005 9652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0213 15:09:30.352301 9652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0213 15:09:30.368855 9652 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0213 15:09:30.368937 9652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0213 15:09:30.385399 9652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0213 15:09:30.402042 9652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0213 15:09:30.418751 9652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0213 15:09:30.434210 9652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0213 15:09:30.449653 9652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0213 15:09:30.465762 9652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0213 15:09:30.480796 9652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0213 15:09:30.496072 9652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 15:09:30.559159 9652 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0213 15:09:30.648069 9652 start.go:475] detecting cgroup driver to use...
I0213 15:09:30.648103 9652 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0213 15:09:30.648214 9652 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0213 15:09:30.667414 9652 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0213 15:09:30.667484 9652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0213 15:09:30.686980 9652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0213 15:09:30.718409 9652 ssh_runner.go:195] Run: which cri-dockerd
I0213 15:09:30.722695 9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0213 15:09:30.738140 9652 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0213 15:09:30.767910 9652 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0213 15:09:30.855187 9652 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0213 15:09:30.920942 9652 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0213 15:09:30.921088 9652 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0213 15:09:30.951648 9652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 15:09:31.014921 9652 ssh_runner.go:195] Run: sudo systemctl restart docker
I0213 15:09:31.254985 9652 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0213 15:09:31.278515 9652 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0213 15:09:31.347047 9652 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
I0213 15:09:31.347178 9652 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-181000 dig +short host.docker.internal
I0213 15:09:31.469345 9652 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0213 15:09:31.469492 9652 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0213 15:09:31.474024 9652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0213 15:09:31.491134 9652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-181000
I0213 15:09:31.544727 9652 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0213 15:09:31.544818 9652 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0213 15:09:31.563087 9652 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0213 15:09:31.563101 9652 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0213 15:09:31.563158 9652 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0213 15:09:31.578017 9652 ssh_runner.go:195] Run: which lz4
I0213 15:09:31.582436 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0213 15:09:31.582545 9652 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0213 15:09:31.586858 9652 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0213 15:09:31.586885 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0213 15:09:38.376589 9652 docker.go:649] Took 6.794167 seconds to copy over tarball
I0213 15:09:38.376721 9652 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0213 15:09:40.080819 9652 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.70403556s)
I0213 15:09:40.080853 9652 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0213 15:09:40.137763 9652 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0213 15:09:40.153476 9652 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0213 15:09:40.182812 9652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 15:09:40.248363 9652 ssh_runner.go:195] Run: sudo systemctl restart docker
I0213 15:09:41.294707 9652 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.046321694s)
I0213 15:09:41.294872 9652 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0213 15:09:41.313750 9652 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0213 15:09:41.313765 9652 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0213 15:09:41.313777 9652 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0213 15:09:41.319602 9652 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0213 15:09:41.319636 9652 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0213 15:09:41.319628 9652 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0213 15:09:41.320012 9652 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0213 15:09:41.320094 9652 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0213 15:09:41.320319 9652 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0213 15:09:41.320528 9652 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0213 15:09:41.320558 9652 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0213 15:09:41.324669 9652 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0213 15:09:41.325083 9652 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0213 15:09:41.326510 9652 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0213 15:09:41.326590 9652 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0213 15:09:41.326773 9652 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0213 15:09:41.326770 9652 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0213 15:09:41.326869 9652 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0213 15:09:41.327037 9652 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0213 15:09:43.245367 9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0213 15:09:43.265928 9652 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0213 15:09:43.265968 9652 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0213 15:09:43.266039 9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0213 15:09:43.284110 9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0213 15:09:43.289184 9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0213 15:09:43.307733 9652 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0213 15:09:43.307765 9652 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0213 15:09:43.307826 9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0213 15:09:43.326100 9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0213 15:09:43.332383 9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0213 15:09:43.350470 9652 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0213 15:09:43.350496 9652 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
I0213 15:09:43.350552 9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0213 15:09:43.364805 9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0213 15:09:43.368402 9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0213 15:09:43.375707 9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0213 15:09:43.376292 9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0213 15:09:43.385279 9652 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0213 15:09:43.385305 9652 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0213 15:09:43.385382 9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0213 15:09:43.385668 9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0213 15:09:43.400033 9652 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0213 15:09:43.400063 9652 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0213 15:09:43.400128 9652 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0213 15:09:43.400150 9652 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0213 15:09:43.400165 9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0213 15:09:43.400220 9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0213 15:09:43.410430 9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0213 15:09:43.410784 9652 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0213 15:09:43.410811 9652 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
I0213 15:09:43.410867 9652 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0213 15:09:43.443755 9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0213 15:09:43.444163 9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0213 15:09:43.452418 9652 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0213 15:09:43.863590 9652 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0213 15:09:43.883075 9652 cache_images.go:92] LoadImages completed in 2.569314804s
W0213 15:09:43.883125 9652 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18169-6320/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
I0213 15:09:43.883207 9652 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0213 15:09:43.930258 9652 cni.go:84] Creating CNI manager for ""
I0213 15:09:43.930280 9652 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0213 15:09:43.930297 9652 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0213 15:09:43.930316 9652 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-181000 NodeName:ingress-addon-legacy-181000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0213 15:09:43.930443 9652 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-181000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0213 15:09:43.930519 9652 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-181000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-181000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0213 15:09:43.930581 9652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0213 15:09:43.945828 9652 binaries.go:44] Found k8s binaries, skipping transfer
I0213 15:09:43.945930 9652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0213 15:09:43.961559 9652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0213 15:09:43.990307 9652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0213 15:09:44.019355 9652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0213 15:09:44.049742 9652 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0213 15:09:44.054378 9652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0213 15:09:44.071236 9652 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000 for IP: 192.168.49.2
I0213 15:09:44.071257 9652 certs.go:190] acquiring lock for shared ca certs: {Name:mkc037f48c69539d66bd92ede4890b05c28518b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:09:44.071429 9652 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key
I0213 15:09:44.071504 9652 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key
I0213 15:09:44.071553 9652 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.key
I0213 15:09:44.071569 9652 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.crt with IP's: []
I0213 15:09:44.271303 9652 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.crt ...
I0213 15:09:44.271317 9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.crt: {Name:mkb1064f16bfde5f75907db94e49fd65a44aa1be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:09:44.271634 9652 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.key ...
I0213 15:09:44.271643 9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/client.key: {Name:mkffe42e448aba377178de2e6d44d591e5c6694c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:09:44.271862 9652 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key.dd3b5fb2
I0213 15:09:44.271883 9652 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0213 15:09:44.390905 9652 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt.dd3b5fb2 ...
I0213 15:09:44.390916 9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt.dd3b5fb2: {Name:mkea8110efdc719375bd451115da36144123d377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:09:44.391180 9652 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key.dd3b5fb2 ...
I0213 15:09:44.391192 9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key.dd3b5fb2: {Name:mkf19426e626e1a69599caf73770b2e8e490c01d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:09:44.391394 9652 certs.go:337] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt
I0213 15:09:44.391561 9652 certs.go:341] copying /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key
I0213 15:09:44.391728 9652 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key
I0213 15:09:44.391741 9652 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt with IP's: []
I0213 15:09:44.443736 9652 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt ...
I0213 15:09:44.443746 9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt: {Name:mk535a8c35161968c1bcf86ff771ade1a2f92e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:09:44.443989 9652 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key ...
I0213 15:09:44.444002 9652 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key: {Name:mk310d5595f841f8bcf734e06d61de18e94b68ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:09:44.444193 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0213 15:09:44.444222 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0213 15:09:44.444240 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0213 15:09:44.444262 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0213 15:09:44.444282 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0213 15:09:44.444299 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0213 15:09:44.444316 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0213 15:09:44.444332 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0213 15:09:44.444416 9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem (1338 bytes)
W0213 15:09:44.444461 9652 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776_empty.pem, impossibly tiny 0 bytes
I0213 15:09:44.444492 9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca-key.pem (1675 bytes)
I0213 15:09:44.444533 9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/ca.pem (1078 bytes)
I0213 15:09:44.444567 9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/cert.pem (1123 bytes)
I0213 15:09:44.444600 9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/certs/key.pem (1675 bytes)
I0213 15:09:44.444675 9652 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem (1708 bytes)
I0213 15:09:44.444712 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0213 15:09:44.444733 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem -> /usr/share/ca-certificates/6776.pem
I0213 15:09:44.444757 9652 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem -> /usr/share/ca-certificates/67762.pem
I0213 15:09:44.445275 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0213 15:09:44.486936 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0213 15:09:44.527814 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0213 15:09:44.568540 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/profiles/ingress-addon-legacy-181000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0213 15:09:44.608976 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0213 15:09:44.649834 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0213 15:09:44.690497 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0213 15:09:44.732137 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0213 15:09:44.772902 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0213 15:09:44.813586 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/certs/6776.pem --> /usr/share/ca-certificates/6776.pem (1338 bytes)
I0213 15:09:44.854022 9652 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-6320/.minikube/files/etc/ssl/certs/67762.pem --> /usr/share/ca-certificates/67762.pem (1708 bytes)
I0213 15:09:44.893781 9652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0213 15:09:44.922662 9652 ssh_runner.go:195] Run: openssl version
I0213 15:09:44.928700 9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0213 15:09:44.944476 9652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0213 15:09:44.948773 9652 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:54 /usr/share/ca-certificates/minikubeCA.pem
I0213 15:09:44.948818 9652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0213 15:09:44.955327 9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0213 15:09:44.970803 9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6776.pem && ln -fs /usr/share/ca-certificates/6776.pem /etc/ssl/certs/6776.pem"
I0213 15:09:44.986741 9652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6776.pem
I0213 15:09:44.991382 9652 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 23:02 /usr/share/ca-certificates/6776.pem
I0213 15:09:44.991462 9652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6776.pem
I0213 15:09:44.998301 9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6776.pem /etc/ssl/certs/51391683.0"
I0213 15:09:45.014793 9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67762.pem && ln -fs /usr/share/ca-certificates/67762.pem /etc/ssl/certs/67762.pem"
I0213 15:09:45.031463 9652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67762.pem
I0213 15:09:45.035966 9652 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 23:02 /usr/share/ca-certificates/67762.pem
I0213 15:09:45.036018 9652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67762.pem
I0213 15:09:45.042714 9652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67762.pem /etc/ssl/certs/3ec20f2e.0"
I0213 15:09:45.058336 9652 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0213 15:09:45.062598 9652 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0213 15:09:45.062644 9652 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-181000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-181000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0213 15:09:45.062743 9652 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0213 15:09:45.081562 9652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0213 15:09:45.096841 9652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0213 15:09:45.111773 9652 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0213 15:09:45.111836 9652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0213 15:09:45.127639 9652 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0213 15:09:45.127683 9652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0213 15:09:45.179099 9652 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0213 15:09:45.179154 9652 kubeadm.go:322] [preflight] Running pre-flight checks
I0213 15:09:45.453725 9652 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0213 15:09:45.453894 9652 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0213 15:09:45.454004 9652 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0213 15:09:45.619405 9652 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0213 15:09:45.619894 9652 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0213 15:09:45.619941 9652 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0213 15:09:45.697064 9652 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0213 15:09:45.743263 9652 out.go:204] - Generating certificates and keys ...
I0213 15:09:45.743330 9652 kubeadm.go:322] [certs] Using existing ca certificate authority
I0213 15:09:45.743387 9652 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0213 15:09:46.016890 9652 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0213 15:09:46.135012 9652 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0213 15:09:46.240726 9652 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0213 15:09:46.483591 9652 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0213 15:09:46.642561 9652 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0213 15:09:46.642742 9652 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0213 15:09:46.816467 9652 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0213 15:09:46.816693 9652 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0213 15:09:46.989837 9652 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0213 15:09:47.071238 9652 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0213 15:09:47.260438 9652 kubeadm.go:322] [certs] Generating "sa" key and public key
I0213 15:09:47.260535 9652 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0213 15:09:47.436955 9652 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0213 15:09:47.666880 9652 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0213 15:09:47.758533 9652 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0213 15:09:47.806297 9652 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0213 15:09:47.806781 9652 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0213 15:09:47.829513 9652 out.go:204] - Booting up control plane ...
I0213 15:09:47.829592 9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0213 15:09:47.829680 9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0213 15:09:47.829743 9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0213 15:09:47.829813 9652 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0213 15:09:47.829940 9652 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0213 15:10:27.817636 9652 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0213 15:10:27.818625 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:10:27.818860 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:10:32.819769 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:10:32.819944 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:10:42.821271 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:10:42.821553 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:11:02.824119 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:11:02.824346 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:11:42.824226 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:11:42.824404 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:11:42.824427 9652 kubeadm.go:322]
I0213 15:11:42.824470 9652 kubeadm.go:322] Unfortunately, an error has occurred:
I0213 15:11:42.824512 9652 kubeadm.go:322] timed out waiting for the condition
I0213 15:11:42.824517 9652 kubeadm.go:322]
I0213 15:11:42.824568 9652 kubeadm.go:322] This error is likely caused by:
I0213 15:11:42.824597 9652 kubeadm.go:322] - The kubelet is not running
I0213 15:11:42.824706 9652 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0213 15:11:42.824719 9652 kubeadm.go:322]
I0213 15:11:42.824813 9652 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0213 15:11:42.824845 9652 kubeadm.go:322] - 'systemctl status kubelet'
I0213 15:11:42.824881 9652 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0213 15:11:42.824890 9652 kubeadm.go:322]
I0213 15:11:42.824994 9652 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0213 15:11:42.825063 9652 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0213 15:11:42.825076 9652 kubeadm.go:322]
I0213 15:11:42.825145 9652 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0213 15:11:42.825182 9652 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0213 15:11:42.825241 9652 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0213 15:11:42.825274 9652 kubeadm.go:322] - 'docker logs CONTAINERID'
I0213 15:11:42.825282 9652 kubeadm.go:322]
I0213 15:11:42.829395 9652 kubeadm.go:322] W0213 23:09:45.178564 1705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0213 15:11:42.829632 9652 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0213 15:11:42.829728 9652 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0213 15:11:42.829842 9652 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0213 15:11:42.829920 9652 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0213 15:11:42.830013 9652 kubeadm.go:322] W0213 23:09:47.812069 1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0213 15:11:42.830101 9652 kubeadm.go:322] W0213 23:09:47.812889 1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0213 15:11:42.830166 9652 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0213 15:11:42.830227 9652 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0213 15:11:42.830334 9652 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0213 23:09:45.178564 1705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0213 23:09:47.812069 1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0213 23:09:47.812889 1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-181000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0213 23:09:45.178564 1705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0213 23:09:47.812069 1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0213 23:09:47.812889 1705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0213 15:11:42.830371 9652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0213 15:11:43.257072 9652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0213 15:11:43.274286 9652 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0213 15:11:43.274356 9652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0213 15:11:43.289388 9652 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0213 15:11:43.289416 9652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0213 15:11:43.342688 9652 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0213 15:11:43.342760 9652 kubeadm.go:322] [preflight] Running pre-flight checks
I0213 15:11:43.576339 9652 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0213 15:11:43.576432 9652 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0213 15:11:43.576523 9652 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0213 15:11:43.742890 9652 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0213 15:11:43.743426 9652 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0213 15:11:43.743465 9652 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0213 15:11:43.815304 9652 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0213 15:11:43.836993 9652 out.go:204] - Generating certificates and keys ...
I0213 15:11:43.837120 9652 kubeadm.go:322] [certs] Using existing ca certificate authority
I0213 15:11:43.837183 9652 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0213 15:11:43.837300 9652 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0213 15:11:43.837348 9652 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0213 15:11:43.837400 9652 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0213 15:11:43.837441 9652 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0213 15:11:43.837494 9652 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0213 15:11:43.837543 9652 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0213 15:11:43.837624 9652 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0213 15:11:43.837743 9652 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0213 15:11:43.837779 9652 kubeadm.go:322] [certs] Using the existing "sa" key
I0213 15:11:43.837819 9652 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0213 15:11:43.863969 9652 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0213 15:11:44.043777 9652 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0213 15:11:44.153406 9652 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0213 15:11:44.234967 9652 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0213 15:11:44.235380 9652 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0213 15:11:44.256840 9652 out.go:204] - Booting up control plane ...
I0213 15:11:44.256906 9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0213 15:11:44.256961 9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0213 15:11:44.257003 9652 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0213 15:11:44.257068 9652 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0213 15:11:44.257201 9652 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0213 15:12:24.244520 9652 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0213 15:12:24.245417 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:12:24.245554 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:12:29.247099 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:12:29.247344 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:12:39.249051 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:12:39.249203 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:12:59.250589 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:12:59.250769 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:13:39.252723 9652 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 15:13:39.252897 9652 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 15:13:39.252911 9652 kubeadm.go:322]
I0213 15:13:39.252940 9652 kubeadm.go:322] Unfortunately, an error has occurred:
I0213 15:13:39.252988 9652 kubeadm.go:322] timed out waiting for the condition
I0213 15:13:39.252999 9652 kubeadm.go:322]
I0213 15:13:39.253026 9652 kubeadm.go:322] This error is likely caused by:
I0213 15:13:39.253051 9652 kubeadm.go:322] - The kubelet is not running
I0213 15:13:39.253146 9652 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0213 15:13:39.253159 9652 kubeadm.go:322]
I0213 15:13:39.253243 9652 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0213 15:13:39.253285 9652 kubeadm.go:322] - 'systemctl status kubelet'
I0213 15:13:39.253339 9652 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0213 15:13:39.253347 9652 kubeadm.go:322]
I0213 15:13:39.253433 9652 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0213 15:13:39.253506 9652 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0213 15:13:39.253512 9652 kubeadm.go:322]
I0213 15:13:39.253578 9652 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0213 15:13:39.253617 9652 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0213 15:13:39.253674 9652 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0213 15:13:39.253705 9652 kubeadm.go:322] - 'docker logs CONTAINERID'
I0213 15:13:39.253713 9652 kubeadm.go:322]
I0213 15:13:39.257612 9652 kubeadm.go:322] W0213 23:11:43.341596 4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0213 15:13:39.257749 9652 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0213 15:13:39.257822 9652 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0213 15:13:39.257931 9652 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0213 15:13:39.258022 9652 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0213 15:13:39.258115 9652 kubeadm.go:322] W0213 23:11:44.240004 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0213 15:13:39.258206 9652 kubeadm.go:322] W0213 23:11:44.240685 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0213 15:13:39.258266 9652 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0213 15:13:39.258333 9652 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0213 15:13:39.258360 9652 kubeadm.go:406] StartCluster complete in 3m54.198478239s
I0213 15:13:39.258445 9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:13:39.276597 9652 logs.go:276] 0 containers: []
W0213 15:13:39.276618 9652 logs.go:278] No container was found matching "kube-apiserver"
I0213 15:13:39.276710 9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:13:39.294670 9652 logs.go:276] 0 containers: []
W0213 15:13:39.294684 9652 logs.go:278] No container was found matching "etcd"
I0213 15:13:39.294757 9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:13:39.312405 9652 logs.go:276] 0 containers: []
W0213 15:13:39.312419 9652 logs.go:278] No container was found matching "coredns"
I0213 15:13:39.312487 9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:13:39.330699 9652 logs.go:276] 0 containers: []
W0213 15:13:39.330712 9652 logs.go:278] No container was found matching "kube-scheduler"
I0213 15:13:39.330788 9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:13:39.348796 9652 logs.go:276] 0 containers: []
W0213 15:13:39.348809 9652 logs.go:278] No container was found matching "kube-proxy"
I0213 15:13:39.348887 9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:13:39.365395 9652 logs.go:276] 0 containers: []
W0213 15:13:39.365408 9652 logs.go:278] No container was found matching "kube-controller-manager"
I0213 15:13:39.365479 9652 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:13:39.382273 9652 logs.go:276] 0 containers: []
W0213 15:13:39.382286 9652 logs.go:278] No container was found matching "kindnet"
I0213 15:13:39.382294 9652 logs.go:123] Gathering logs for Docker ...
I0213 15:13:39.382302 9652 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:13:39.403643 9652 logs.go:123] Gathering logs for container status ...
I0213 15:13:39.403657 9652 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:13:39.464895 9652 logs.go:123] Gathering logs for kubelet ...
I0213 15:13:39.464909 9652 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:13:39.507120 9652 logs.go:123] Gathering logs for dmesg ...
I0213 15:13:39.507136 9652 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:13:39.526527 9652 logs.go:123] Gathering logs for describe nodes ...
I0213 15:13:39.526541 9652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0213 15:13:39.586905 9652 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
W0213 15:13:39.586936 9652 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0213 23:11:43.341596 4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0213 23:11:44.240004 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0213 23:11:44.240685 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0213 15:13:39.586956 9652 out.go:239] *
W0213 15:13:39.587002 9652 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0213 23:11:43.341596 4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0213 23:11:44.240004 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0213 23:11:44.240685 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0213 15:13:39.587017 9652 out.go:239] *
W0213 15:13:39.587644 9652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0213 15:13:39.674487 9652 out.go:177]
W0213 15:13:39.717390 9652 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0213 23:11:43.341596 4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0213 23:11:44.240004 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0213 23:11:44.240685 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0213 15:13:39.717451 9652 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0213 15:13:39.717517 9652 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0213 15:13:39.759415 9652 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-181000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (276.31s)