=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-134000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0108 18:45:24.131135 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:45:50.517589 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.523312 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.535473 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.557832 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.599211 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.681340 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:50.843547 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:51.165356 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:51.805732 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:51.822928 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/addons-388000/client.crt: no such file or directory
E0108 18:45:53.086526 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:45:55.646959 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:46:00.767297 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:46:11.007936 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:46:31.487972 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
E0108 18:47:12.448991 75369 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/functional-142000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-134000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m21.55641026s)
-- stdout --
* [ingress-addon-legacy-134000] minikube v1.32.0 on Darwin 14.2.1
- MINIKUBE_LOCATION=17866
- KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-134000 in cluster ingress-addon-legacy-134000
* Pulling base image v0.0.42-1704759386-17866 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0108 18:43:17.435562 78141 out.go:296] Setting OutFile to fd 1 ...
I0108 18:43:17.435843 78141 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:43:17.435852 78141 out.go:309] Setting ErrFile to fd 2...
I0108 18:43:17.435859 78141 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 18:43:17.436200 78141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-74927/.minikube/bin
I0108 18:43:17.438258 78141 out.go:303] Setting JSON to false
I0108 18:43:17.465564 78141 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":34969,"bootTime":1704733228,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0108 18:43:17.465679 78141 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0108 18:43:17.486987 78141 out.go:177] * [ingress-addon-legacy-134000] minikube v1.32.0 on Darwin 14.2.1
I0108 18:43:17.509156 78141 notify.go:220] Checking for updates...
I0108 18:43:17.529878 78141 out.go:177] - MINIKUBE_LOCATION=17866
I0108 18:43:17.573745 78141 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/17866-74927/kubeconfig
I0108 18:43:17.616791 78141 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0108 18:43:17.658634 78141 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 18:43:17.679840 78141 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-74927/.minikube
I0108 18:43:17.721761 78141 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0108 18:43:17.743241 78141 driver.go:392] Setting default libvirt URI to qemu:///system
I0108 18:43:17.801579 78141 docker.go:122] docker version: linux-24.0.7:Docker Desktop 4.26.0 (130397)
I0108 18:43:17.801744 78141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 18:43:17.909900 78141 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-09 02:43:17.899945498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
I0108 18:43:17.930956 78141 out.go:177] * Using the docker driver based on user configuration
I0108 18:43:17.973074 78141 start.go:298] selected driver: docker
I0108 18:43:17.973104 78141 start.go:902] validating driver "docker" against <nil>
I0108 18:43:17.973118 78141 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 18:43:17.977575 78141 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0108 18:43:18.080151 78141 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:63 SystemTime:2024-01-09 02:43:18.070736133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.5.11-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6221275136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.0-desktop.2] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.3-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:0.1] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.10] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.2.0]] Warnings:<nil>}}
I0108 18:43:18.080355 78141 start_flags.go:307] no existing cluster config was found, will generate one from the flags
I0108 18:43:18.080539 78141 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0108 18:43:18.102049 78141 out.go:177] * Using Docker Desktop driver with root privileges
I0108 18:43:18.122764 78141 cni.go:84] Creating CNI manager for ""
I0108 18:43:18.122804 78141 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0108 18:43:18.122825 78141 start_flags.go:321] config:
{Name:ingress-addon-legacy-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-134000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0108 18:43:18.145109 78141 out.go:177] * Starting control plane node ingress-addon-legacy-134000 in cluster ingress-addon-legacy-134000
I0108 18:43:18.166947 78141 cache.go:121] Beginning downloading kic base image for docker with docker
I0108 18:43:18.188570 78141 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
I0108 18:43:18.230920 78141 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 18:43:18.231001 78141 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
I0108 18:43:18.283070 78141 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
I0108 18:43:18.283095 78141 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
I0108 18:43:18.284472 78141 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0108 18:43:18.284486 78141 cache.go:56] Caching tarball of preloaded images
I0108 18:43:18.284681 78141 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 18:43:18.306485 78141 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0108 18:43:18.348258 78141 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 18:43:18.427306 78141 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0108 18:43:25.492415 78141 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 18:43:25.492597 78141 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0108 18:43:26.123641 78141 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0108 18:43:26.123966 78141 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/config.json ...
I0108 18:43:26.123993 78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/config.json: {Name:mk96f6108a6d2d92aa0942f6b6515cfeb1c7d186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 18:43:26.124322 78141 cache.go:194] Successfully downloaded all kic artifacts
I0108 18:43:26.124357 78141 start.go:365] acquiring machines lock for ingress-addon-legacy-134000: {Name:mk10b614d1fdefcebb96221272b7d22008caaa38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 18:43:26.124487 78141 start.go:369] acquired machines lock for "ingress-addon-legacy-134000" in 122.978µs
I0108 18:43:26.124509 78141 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-134000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 18:43:26.124580 78141 start.go:125] createHost starting for "" (driver="docker")
I0108 18:43:26.176315 78141 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0108 18:43:26.176614 78141 start.go:159] libmachine.API.Create for "ingress-addon-legacy-134000" (driver="docker")
I0108 18:43:26.176663 78141 client.go:168] LocalClient.Create starting
I0108 18:43:26.176862 78141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem
I0108 18:43:26.176954 78141 main.go:141] libmachine: Decoding PEM data...
I0108 18:43:26.176985 78141 main.go:141] libmachine: Parsing certificate...
I0108 18:43:26.177075 78141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem
I0108 18:43:26.177148 78141 main.go:141] libmachine: Decoding PEM data...
I0108 18:43:26.177165 78141 main.go:141] libmachine: Parsing certificate...
I0108 18:43:26.177982 78141 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-134000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0108 18:43:26.233731 78141 cli_runner.go:211] docker network inspect ingress-addon-legacy-134000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0108 18:43:26.233864 78141 network_create.go:281] running [docker network inspect ingress-addon-legacy-134000] to gather additional debugging logs...
I0108 18:43:26.233887 78141 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-134000
W0108 18:43:26.285670 78141 cli_runner.go:211] docker network inspect ingress-addon-legacy-134000 returned with exit code 1
I0108 18:43:26.285716 78141 network_create.go:284] error running [docker network inspect ingress-addon-legacy-134000]: docker network inspect ingress-addon-legacy-134000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-134000 not found
I0108 18:43:26.285739 78141 network_create.go:286] output of [docker network inspect ingress-addon-legacy-134000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-134000 not found
** /stderr **
I0108 18:43:26.285903 78141 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0108 18:43:26.337016 78141 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00061a3d0}
I0108 18:43:26.337058 78141 network_create.go:124] attempt to create docker network ingress-addon-legacy-134000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0108 18:43:26.337130 78141 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-134000 ingress-addon-legacy-134000
I0108 18:43:26.422752 78141 network_create.go:108] docker network ingress-addon-legacy-134000 192.168.49.0/24 created
I0108 18:43:26.422812 78141 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-134000" container
I0108 18:43:26.422922 78141 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0108 18:43:26.474090 78141 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-134000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-134000 --label created_by.minikube.sigs.k8s.io=true
I0108 18:43:26.525781 78141 oci.go:103] Successfully created a docker volume ingress-addon-legacy-134000
I0108 18:43:26.525924 78141 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-134000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-134000 --entrypoint /usr/bin/test -v ingress-addon-legacy-134000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
I0108 18:43:26.913805 78141 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-134000
I0108 18:43:26.913871 78141 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 18:43:26.913885 78141 kic.go:194] Starting extracting preloaded images to volume ...
I0108 18:43:26.914005 78141 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-134000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
I0108 18:43:29.144215 78141 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-134000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.23015887s)
I0108 18:43:29.144239 78141 kic.go:203] duration metric: took 2.230374 seconds to extract preloaded images to volume
I0108 18:43:29.144357 78141 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0108 18:43:29.244509 78141 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-134000 --name ingress-addon-legacy-134000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-134000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-134000 --network ingress-addon-legacy-134000 --ip 192.168.49.2 --volume ingress-addon-legacy-134000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
I0108 18:43:29.520261 78141 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Running}}
I0108 18:43:29.574612 78141 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
I0108 18:43:29.631576 78141 cli_runner.go:164] Run: docker exec ingress-addon-legacy-134000 stat /var/lib/dpkg/alternatives/iptables
I0108 18:43:29.791731 78141 oci.go:144] the created container "ingress-addon-legacy-134000" has a running status.
I0108 18:43:29.791777 78141 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa...
I0108 18:43:29.937747 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0108 18:43:29.937813 78141 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0108 18:43:30.000420 78141 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
I0108 18:43:30.055095 78141 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0108 18:43:30.055116 78141 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-134000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0108 18:43:30.153079 78141 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-134000 --format={{.State.Status}}
I0108 18:43:30.205079 78141 machine.go:88] provisioning docker machine ...
I0108 18:43:30.205129 78141 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-134000"
I0108 18:43:30.205243 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:30.256351 78141 main.go:141] libmachine: Using SSH client type: native
I0108 18:43:30.256685 78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 63373 <nil> <nil>}
I0108 18:43:30.256702 78141 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-134000 && echo "ingress-addon-legacy-134000" | sudo tee /etc/hostname
I0108 18:43:30.401949 78141 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-134000
I0108 18:43:30.402064 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:30.454199 78141 main.go:141] libmachine: Using SSH client type: native
I0108 18:43:30.454493 78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 63373 <nil> <nil>}
I0108 18:43:30.454520 78141 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-134000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-134000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-134000' | sudo tee -a /etc/hosts;
fi
fi
I0108 18:43:30.588685 78141 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0108 18:43:30.588710 78141 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-74927/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-74927/.minikube}
I0108 18:43:30.588731 78141 ubuntu.go:177] setting up certificates
I0108 18:43:30.588744 78141 provision.go:83] configureAuth start
I0108 18:43:30.588824 78141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-134000
I0108 18:43:30.639879 78141 provision.go:138] copyHostCerts
I0108 18:43:30.639926 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
I0108 18:43:30.639983 78141 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem, removing ...
I0108 18:43:30.639991 78141 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem
I0108 18:43:30.640117 78141 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.pem (1078 bytes)
I0108 18:43:30.640311 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
I0108 18:43:30.640341 78141 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem, removing ...
I0108 18:43:30.640346 78141 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem
I0108 18:43:30.640445 78141 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/cert.pem (1123 bytes)
I0108 18:43:30.640584 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
I0108 18:43:30.640622 78141 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem, removing ...
I0108 18:43:30.640626 78141 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem
I0108 18:43:30.640731 78141 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-74927/.minikube/key.pem (1679 bytes)
I0108 18:43:30.640916 78141 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-134000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-134000]
I0108 18:43:30.743500 78141 provision.go:172] copyRemoteCerts
I0108 18:43:30.743549 78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 18:43:30.743608 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:30.795579 78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
I0108 18:43:30.890866 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0108 18:43:30.890951 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0108 18:43:30.910641 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem -> /etc/docker/server.pem
I0108 18:43:30.910715 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0108 18:43:30.931061 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0108 18:43:30.931147 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0108 18:43:30.950987 78141 provision.go:86] duration metric: configureAuth took 362.230814ms
I0108 18:43:30.951002 78141 ubuntu.go:193] setting minikube options for container-runtime
I0108 18:43:30.951146 78141 config.go:182] Loaded profile config "ingress-addon-legacy-134000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0108 18:43:30.951226 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:31.003150 78141 main.go:141] libmachine: Using SSH client type: native
I0108 18:43:31.003448 78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 63373 <nil> <nil>}
I0108 18:43:31.003467 78141 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0108 18:43:31.138734 78141 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0108 18:43:31.138761 78141 ubuntu.go:71] root file system type: overlay
I0108 18:43:31.138862 78141 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0108 18:43:31.138952 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:31.190708 78141 main.go:141] libmachine: Using SSH client type: native
I0108 18:43:31.191024 78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 63373 <nil> <nil>}
I0108 18:43:31.191077 78141 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0108 18:43:31.333563 78141 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0108 18:43:31.333658 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:31.386114 78141 main.go:141] libmachine: Using SSH client type: native
I0108 18:43:31.386418 78141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 127.0.0.1 63373 <nil> <nil>}
I0108 18:43:31.386431 78141 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0108 18:43:31.953338 78141 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-10-26 09:06:22.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-01-09 02:43:31.331772822 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0108 18:43:31.953366 78141 machine.go:91] provisioned docker machine in 1.748279329s
I0108 18:43:31.953381 78141 client.go:171] LocalClient.Create took 5.776764198s
I0108 18:43:31.953401 78141 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-134000" took 5.776841772s
I0108 18:43:31.953409 78141 start.go:300] post-start starting for "ingress-addon-legacy-134000" (driver="docker")
I0108 18:43:31.953417 78141 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 18:43:31.953482 78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 18:43:31.953544 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:32.006087 78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
I0108 18:43:32.101475 78141 ssh_runner.go:195] Run: cat /etc/os-release
I0108 18:43:32.105350 78141 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0108 18:43:32.105376 78141 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0108 18:43:32.105384 78141 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0108 18:43:32.105389 78141 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0108 18:43:32.105404 78141 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/addons for local assets ...
I0108 18:43:32.105508 78141 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-74927/.minikube/files for local assets ...
I0108 18:43:32.105697 78141 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> 753692.pem in /etc/ssl/certs
I0108 18:43:32.105704 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> /etc/ssl/certs/753692.pem
I0108 18:43:32.105909 78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 18:43:32.113825 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /etc/ssl/certs/753692.pem (1708 bytes)
I0108 18:43:32.133921 78141 start.go:303] post-start completed in 180.504421ms
I0108 18:43:32.134521 78141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-134000
I0108 18:43:32.185949 78141 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/config.json ...
I0108 18:43:32.186433 78141 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0108 18:43:32.186499 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:32.237582 78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
I0108 18:43:32.329850 78141 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0108 18:43:32.334616 78141 start.go:128] duration metric: createHost completed in 6.210080897s
I0108 18:43:32.334633 78141 start.go:83] releasing machines lock for "ingress-addon-legacy-134000", held for 6.210194051s
I0108 18:43:32.334710 78141 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-134000
I0108 18:43:32.385908 78141 ssh_runner.go:195] Run: cat /version.json
I0108 18:43:32.385935 78141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0108 18:43:32.385984 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:32.386022 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:32.466370 78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
I0108 18:43:32.466369 78141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63373 SSHKeyPath:/Users/jenkins/minikube-integration/17866-74927/.minikube/machines/ingress-addon-legacy-134000/id_rsa Username:docker}
I0108 18:43:32.666384 78141 ssh_runner.go:195] Run: systemctl --version
I0108 18:43:32.671356 78141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0108 18:43:32.676242 78141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0108 18:43:32.697676 78141 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0108 18:43:32.697752 78141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0108 18:43:32.712581 78141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0108 18:43:32.727276 78141 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0108 18:43:32.727291 78141 start.go:475] detecting cgroup driver to use...
I0108 18:43:32.727303 78141 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0108 18:43:32.727428 78141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 18:43:32.741822 78141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0108 18:43:32.750888 78141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0108 18:43:32.760015 78141 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0108 18:43:32.760073 78141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0108 18:43:32.769434 78141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0108 18:43:32.778620 78141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0108 18:43:32.787820 78141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0108 18:43:32.796876 78141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0108 18:43:32.805528 78141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0108 18:43:32.814746 78141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0108 18:43:32.822648 78141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0108 18:43:32.830598 78141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 18:43:32.879051 78141 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0108 18:43:32.965322 78141 start.go:475] detecting cgroup driver to use...
I0108 18:43:32.965345 78141 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0108 18:43:32.965414 78141 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0108 18:43:32.983717 78141 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0108 18:43:32.983784 78141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 18:43:32.994811 78141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0108 18:43:33.010822 78141 ssh_runner.go:195] Run: which cri-dockerd
I0108 18:43:33.015239 78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0108 18:43:33.024707 78141 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0108 18:43:33.041304 78141 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0108 18:43:33.115738 78141 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0108 18:43:33.205469 78141 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0108 18:43:33.205562 78141 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0108 18:43:33.222087 78141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 18:43:33.304156 78141 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 18:43:33.537587 78141 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 18:43:33.561066 78141 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 18:43:33.608790 78141 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
I0108 18:43:33.608891 78141 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-134000 dig +short host.docker.internal
I0108 18:43:33.724911 78141 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0108 18:43:33.725011 78141 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0108 18:43:33.729601 78141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 18:43:33.739780 78141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-134000
I0108 18:43:33.790962 78141 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0108 18:43:33.791032 78141 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 18:43:33.808202 78141 docker.go:671] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0108 18:43:33.808231 78141 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0108 18:43:33.808309 78141 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0108 18:43:33.816755 78141 ssh_runner.go:195] Run: which lz4
I0108 18:43:33.820699 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0108 18:43:33.820837 78141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0108 18:43:33.824751 78141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0108 18:43:33.824777 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0108 18:43:39.417321 78141 docker.go:635] Took 5.596583 seconds to copy over tarball
I0108 18:43:39.417400 78141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0108 18:43:41.027041 78141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.609631611s)
I0108 18:43:41.027056 78141 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0108 18:43:41.070422 78141 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0108 18:43:41.078651 78141 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0108 18:43:41.093571 78141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 18:43:41.147743 78141 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 18:43:42.220175 78141 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.072417347s)
I0108 18:43:42.220263 78141 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 18:43:42.239218 78141 docker.go:671] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0108 18:43:42.239231 78141 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0108 18:43:42.239241 78141 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0108 18:43:42.244344 78141 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 18:43:42.244353 78141 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0108 18:43:42.244552 78141 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0108 18:43:42.244597 78141 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0108 18:43:42.244732 78141 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0108 18:43:42.244858 78141 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0108 18:43:42.244979 78141 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0108 18:43:42.245242 78141 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0108 18:43:42.249223 78141 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0108 18:43:42.249218 78141 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0108 18:43:42.250854 78141 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0108 18:43:42.251219 78141 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0108 18:43:42.251475 78141 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0108 18:43:42.251522 78141 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0108 18:43:42.251559 78141 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0108 18:43:42.251536 78141 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0108 18:43:42.694752 78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0108 18:43:42.712890 78141 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0108 18:43:42.712937 78141 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0108 18:43:42.712992 78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0108 18:43:42.730570 78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0108 18:43:42.731489 78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0108 18:43:42.738490 78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0108 18:43:42.749761 78141 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0108 18:43:42.749793 78141 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0108 18:43:42.749861 78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0108 18:43:42.756292 78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0108 18:43:42.759615 78141 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0108 18:43:42.759639 78141 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
I0108 18:43:42.759705 78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0108 18:43:42.768873 78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0108 18:43:42.773494 78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0108 18:43:42.779751 78141 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0108 18:43:42.779785 78141 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0108 18:43:42.779883 78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0108 18:43:42.783075 78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0108 18:43:42.793062 78141 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0108 18:43:42.793097 78141 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
I0108 18:43:42.793177 78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0108 18:43:42.801040 78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0108 18:43:42.804252 78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0108 18:43:42.814754 78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0108 18:43:42.822946 78141 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0108 18:43:42.822973 78141 docker.go:323] Removing image: registry.k8s.io/pause:3.2
I0108 18:43:42.823046 78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0108 18:43:42.840360 78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0108 18:43:42.935842 78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0108 18:43:42.955494 78141 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0108 18:43:42.955529 78141 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0108 18:43:42.955595 78141 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0108 18:43:42.973329 78141 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0108 18:43:43.202318 78141 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0108 18:43:43.221864 78141 cache_images.go:92] LoadImages completed in 982.618104ms
W0108 18:43:43.221914 78141 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17866-74927/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
I0108 18:43:43.221994 78141 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0108 18:43:43.269383 78141 cni.go:84] Creating CNI manager for ""
I0108 18:43:43.269400 78141 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0108 18:43:43.269418 78141 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0108 18:43:43.269435 78141 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-134000 NodeName:ingress-addon-legacy-134000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0108 18:43:43.269543 78141 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-134000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0108 18:43:43.269594 78141 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-134000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-134000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0108 18:43:43.269651 78141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0108 18:43:43.277836 78141 binaries.go:44] Found k8s binaries, skipping transfer
I0108 18:43:43.277897 78141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0108 18:43:43.286127 78141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0108 18:43:43.301218 78141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0108 18:43:43.316864 78141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0108 18:43:43.332062 78141 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0108 18:43:43.336149 78141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 18:43:43.346467 78141 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000 for IP: 192.168.49.2
I0108 18:43:43.346487 78141 certs.go:190] acquiring lock for shared ca certs: {Name:mk44dcbca6ce5cf77b3bf5ce2248b699d6553e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 18:43:43.346673 78141 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key
I0108 18:43:43.346741 78141 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key
I0108 18:43:43.346808 78141 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.key
I0108 18:43:43.346826 78141 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.crt with IP's: []
I0108 18:43:43.490281 78141 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.crt ...
I0108 18:43:43.490292 78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.crt: {Name:mk787ba0b3882cc83956a94e4139ac44fd191304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 18:43:43.490627 78141 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.key ...
I0108 18:43:43.490637 78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/client.key: {Name:mk2a2cf0cf19a48cf28a8dc0d02263196e7191e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 18:43:43.490906 78141 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key.dd3b5fb2
I0108 18:43:43.490922 78141 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0108 18:43:43.900393 78141 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt.dd3b5fb2 ...
I0108 18:43:43.900411 78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt.dd3b5fb2: {Name:mka578b65beb0dab13d354f4e15c4fe7cbd91dc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 18:43:43.900719 78141 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key.dd3b5fb2 ...
I0108 18:43:43.900729 78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key.dd3b5fb2: {Name:mk8e7102f6af35d10ac90492ab01b0faa12e31fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 18:43:43.900946 78141 certs.go:337] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt
I0108 18:43:43.901130 78141 certs.go:341] copying /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key
I0108 18:43:43.901302 78141 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key
I0108 18:43:43.901317 78141 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt with IP's: []
I0108 18:43:44.146675 78141 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt ...
I0108 18:43:44.146686 78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt: {Name:mkc4f31e23d5f93d67fe805f12d900b8c6b58c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 18:43:44.146957 78141 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key ...
I0108 18:43:44.146966 78141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key: {Name:mk3abbebc9fc3110f7c2d8b1e682879a566efc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 18:43:44.147173 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0108 18:43:44.147202 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0108 18:43:44.147221 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0108 18:43:44.147238 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0108 18:43:44.147258 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0108 18:43:44.147275 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0108 18:43:44.147294 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0108 18:43:44.147311 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0108 18:43:44.147406 78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem (1338 bytes)
W0108 18:43:44.147460 78141 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369_empty.pem, impossibly tiny 0 bytes
I0108 18:43:44.147470 78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca-key.pem (1675 bytes)
I0108 18:43:44.147503 78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/ca.pem (1078 bytes)
I0108 18:43:44.147533 78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/cert.pem (1123 bytes)
I0108 18:43:44.147563 78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/certs/key.pem (1679 bytes)
I0108 18:43:44.147625 78141 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem (1708 bytes)
I0108 18:43:44.147663 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0108 18:43:44.147683 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem -> /usr/share/ca-certificates/75369.pem
I0108 18:43:44.147728 78141 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem -> /usr/share/ca-certificates/753692.pem
I0108 18:43:44.148209 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0108 18:43:44.169002 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0108 18:43:44.188874 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0108 18:43:44.209076 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/profiles/ingress-addon-legacy-134000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0108 18:43:44.229137 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0108 18:43:44.249326 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0108 18:43:44.269546 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0108 18:43:44.289831 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0108 18:43:44.310231 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0108 18:43:44.330656 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/certs/75369.pem --> /usr/share/ca-certificates/75369.pem (1338 bytes)
I0108 18:43:44.350630 78141 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-74927/.minikube/files/etc/ssl/certs/753692.pem --> /usr/share/ca-certificates/753692.pem (1708 bytes)
I0108 18:43:44.370609 78141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0108 18:43:44.385949 78141 ssh_runner.go:195] Run: openssl version
I0108 18:43:44.391132 78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0108 18:43:44.400001 78141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0108 18:43:44.404071 78141 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 9 02:33 /usr/share/ca-certificates/minikubeCA.pem
I0108 18:43:44.404116 78141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0108 18:43:44.410573 78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0108 18:43:44.419553 78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75369.pem && ln -fs /usr/share/ca-certificates/75369.pem /etc/ssl/certs/75369.pem"
I0108 18:43:44.428642 78141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75369.pem
I0108 18:43:44.432645 78141 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 9 02:38 /usr/share/ca-certificates/75369.pem
I0108 18:43:44.432694 78141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75369.pem
I0108 18:43:44.438891 78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/75369.pem /etc/ssl/certs/51391683.0"
I0108 18:43:44.447692 78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/753692.pem && ln -fs /usr/share/ca-certificates/753692.pem /etc/ssl/certs/753692.pem"
I0108 18:43:44.456412 78141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/753692.pem
I0108 18:43:44.460494 78141 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 9 02:38 /usr/share/ca-certificates/753692.pem
I0108 18:43:44.460540 78141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/753692.pem
I0108 18:43:44.466904 78141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/753692.pem /etc/ssl/certs/3ec20f2e.0"
I0108 18:43:44.475759 78141 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0108 18:43:44.479766 78141 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0108 18:43:44.479815 78141 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-134000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-134000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0108 18:43:44.479914 78141 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0108 18:43:44.498524 78141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0108 18:43:44.507164 78141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 18:43:44.515368 78141 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0108 18:43:44.515422 78141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 18:43:44.523432 78141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 18:43:44.523462 78141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 18:43:44.583613 78141 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0108 18:43:44.583666 78141 kubeadm.go:322] [preflight] Running pre-flight checks
I0108 18:43:44.815135 78141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0108 18:43:44.815224 78141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0108 18:43:44.815301 78141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0108 18:43:44.975543 78141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0108 18:43:44.976189 78141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0108 18:43:44.976233 78141 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0108 18:43:45.053303 78141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0108 18:43:45.075152 78141 out.go:204] - Generating certificates and keys ...
I0108 18:43:45.075224 78141 kubeadm.go:322] [certs] Using existing ca certificate authority
I0108 18:43:45.075284 78141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0108 18:43:45.241370 78141 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0108 18:43:45.310312 78141 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0108 18:43:45.435612 78141 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0108 18:43:45.608363 78141 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0108 18:43:45.685396 78141 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0108 18:43:45.685528 78141 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0108 18:43:45.864412 78141 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0108 18:43:45.864525 78141 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0108 18:43:46.009609 78141 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0108 18:43:46.070466 78141 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0108 18:43:46.196400 78141 kubeadm.go:322] [certs] Generating "sa" key and public key
I0108 18:43:46.196451 78141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0108 18:43:46.300675 78141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0108 18:43:46.391593 78141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0108 18:43:46.663240 78141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0108 18:43:46.799271 78141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0108 18:43:46.799823 78141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0108 18:43:46.843419 78141 out.go:204] - Booting up control plane ...
I0108 18:43:46.843592 78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0108 18:43:46.843723 78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0108 18:43:46.843866 78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0108 18:43:46.844016 78141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0108 18:43:46.844239 78141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0108 18:44:26.809301 78141 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0108 18:44:26.810039 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:44:26.810283 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:44:31.811819 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:44:31.812011 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:44:41.813371 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:44:41.813583 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:45:01.815179 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:45:01.815433 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:45:41.816902 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:45:41.817202 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:45:41.817226 78141 kubeadm.go:322]
I0108 18:45:41.817271 78141 kubeadm.go:322] Unfortunately, an error has occurred:
I0108 18:45:41.817322 78141 kubeadm.go:322] timed out waiting for the condition
I0108 18:45:41.817330 78141 kubeadm.go:322]
I0108 18:45:41.817370 78141 kubeadm.go:322] This error is likely caused by:
I0108 18:45:41.817404 78141 kubeadm.go:322] - The kubelet is not running
I0108 18:45:41.817535 78141 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0108 18:45:41.817550 78141 kubeadm.go:322]
I0108 18:45:41.817673 78141 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0108 18:45:41.817718 78141 kubeadm.go:322] - 'systemctl status kubelet'
I0108 18:45:41.817751 78141 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0108 18:45:41.817757 78141 kubeadm.go:322]
I0108 18:45:41.817906 78141 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0108 18:45:41.818031 78141 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0108 18:45:41.818061 78141 kubeadm.go:322]
I0108 18:45:41.818149 78141 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0108 18:45:41.818203 78141 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0108 18:45:41.818285 78141 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0108 18:45:41.818332 78141 kubeadm.go:322] - 'docker logs CONTAINERID'
I0108 18:45:41.818345 78141 kubeadm.go:322]
I0108 18:45:41.819529 78141 kubeadm.go:322] W0109 02:43:44.582840 1701 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0108 18:45:41.819680 78141 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0108 18:45:41.819763 78141 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0108 18:45:41.819889 78141 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0108 18:45:41.819991 78141 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 18:45:41.820117 78141 kubeadm.go:322] W0109 02:43:46.803454 1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 18:45:41.820226 78141 kubeadm.go:322] W0109 02:43:46.804233 1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 18:45:41.820295 78141 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0108 18:45:41.820369 78141 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0108 18:45:41.820453 78141 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 02:43:44.582840 1701 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 02:43:46.803454 1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 02:43:46.804233 1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-134000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 02:43:44.582840 1701 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 02:43:46.803454 1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 02:43:46.804233 1701 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0108 18:45:41.820494 78141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0108 18:45:42.238492 78141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 18:45:42.248876 78141 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0108 18:45:42.248931 78141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 18:45:42.257156 78141 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 18:45:42.257184 78141 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0108 18:45:42.310128 78141 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0108 18:45:42.310169 78141 kubeadm.go:322] [preflight] Running pre-flight checks
I0108 18:45:42.534617 78141 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0108 18:45:42.534709 78141 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0108 18:45:42.534783 78141 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0108 18:45:42.701434 78141 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0108 18:45:42.702058 78141 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0108 18:45:42.702112 78141 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0108 18:45:42.773597 78141 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0108 18:45:42.794908 78141 out.go:204] - Generating certificates and keys ...
I0108 18:45:42.795000 78141 kubeadm.go:322] [certs] Using existing ca certificate authority
I0108 18:45:42.795066 78141 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0108 18:45:42.795141 78141 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0108 18:45:42.795197 78141 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0108 18:45:42.795259 78141 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0108 18:45:42.795300 78141 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0108 18:45:42.795358 78141 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0108 18:45:42.795404 78141 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0108 18:45:42.795459 78141 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0108 18:45:42.795523 78141 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0108 18:45:42.795556 78141 kubeadm.go:322] [certs] Using the existing "sa" key
I0108 18:45:42.795623 78141 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0108 18:45:42.928986 78141 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0108 18:45:42.995364 78141 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0108 18:45:43.054138 78141 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0108 18:45:43.391042 78141 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0108 18:45:43.391462 78141 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0108 18:45:43.412782 78141 out.go:204] - Booting up control plane ...
I0108 18:45:43.412922 78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0108 18:45:43.413071 78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0108 18:45:43.413182 78141 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0108 18:45:43.413320 78141 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0108 18:45:43.413595 78141 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0108 18:46:23.399917 78141 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0108 18:46:23.400739 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:46:23.400950 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:46:28.401863 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:46:28.402101 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:46:38.403712 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:46:38.403947 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:46:58.404263 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:46:58.404459 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:47:38.404949 78141 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0108 18:47:38.405229 78141 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0108 18:47:38.405246 78141 kubeadm.go:322]
I0108 18:47:38.405311 78141 kubeadm.go:322] Unfortunately, an error has occurred:
I0108 18:47:38.405355 78141 kubeadm.go:322] timed out waiting for the condition
I0108 18:47:38.405363 78141 kubeadm.go:322]
I0108 18:47:38.405395 78141 kubeadm.go:322] This error is likely caused by:
I0108 18:47:38.405430 78141 kubeadm.go:322] - The kubelet is not running
I0108 18:47:38.405524 78141 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0108 18:47:38.405531 78141 kubeadm.go:322]
I0108 18:47:38.405614 78141 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0108 18:47:38.405644 78141 kubeadm.go:322] - 'systemctl status kubelet'
I0108 18:47:38.405669 78141 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0108 18:47:38.405675 78141 kubeadm.go:322]
I0108 18:47:38.405756 78141 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0108 18:47:38.405825 78141 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0108 18:47:38.405830 78141 kubeadm.go:322]
I0108 18:47:38.405907 78141 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0108 18:47:38.405969 78141 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0108 18:47:38.406061 78141 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0108 18:47:38.406095 78141 kubeadm.go:322] - 'docker logs CONTAINERID'
I0108 18:47:38.406105 78141 kubeadm.go:322]
I0108 18:47:38.407203 78141 kubeadm.go:322] W0109 02:45:42.309787 4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0108 18:47:38.407349 78141 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0108 18:47:38.407421 78141 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0108 18:47:38.407547 78141 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0108 18:47:38.407633 78141 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 18:47:38.407740 78141 kubeadm.go:322] W0109 02:45:43.395871 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 18:47:38.407846 78141 kubeadm.go:322] W0109 02:45:43.396578 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0108 18:47:38.407919 78141 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0108 18:47:38.408019 78141 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0108 18:47:38.408042 78141 kubeadm.go:406] StartCluster complete in 3m53.930315785s
I0108 18:47:38.408136 78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0108 18:47:38.425681 78141 logs.go:284] 0 containers: []
W0108 18:47:38.425695 78141 logs.go:286] No container was found matching "kube-apiserver"
I0108 18:47:38.425764 78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0108 18:47:38.443299 78141 logs.go:284] 0 containers: []
W0108 18:47:38.443316 78141 logs.go:286] No container was found matching "etcd"
I0108 18:47:38.443403 78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0108 18:47:38.461312 78141 logs.go:284] 0 containers: []
W0108 18:47:38.461325 78141 logs.go:286] No container was found matching "coredns"
I0108 18:47:38.461405 78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0108 18:47:38.478596 78141 logs.go:284] 0 containers: []
W0108 18:47:38.478610 78141 logs.go:286] No container was found matching "kube-scheduler"
I0108 18:47:38.478677 78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0108 18:47:38.496189 78141 logs.go:284] 0 containers: []
W0108 18:47:38.496204 78141 logs.go:286] No container was found matching "kube-proxy"
I0108 18:47:38.496272 78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0108 18:47:38.512989 78141 logs.go:284] 0 containers: []
W0108 18:47:38.513008 78141 logs.go:286] No container was found matching "kube-controller-manager"
I0108 18:47:38.513097 78141 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0108 18:47:38.530398 78141 logs.go:284] 0 containers: []
W0108 18:47:38.530412 78141 logs.go:286] No container was found matching "kindnet"
I0108 18:47:38.530420 78141 logs.go:123] Gathering logs for kubelet ...
I0108 18:47:38.530426 78141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0108 18:47:38.565181 78141 logs.go:123] Gathering logs for dmesg ...
I0108 18:47:38.565196 78141 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0108 18:47:38.577476 78141 logs.go:123] Gathering logs for describe nodes ...
I0108 18:47:38.577492 78141 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0108 18:47:38.645012 78141 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0108 18:47:38.645028 78141 logs.go:123] Gathering logs for Docker ...
I0108 18:47:38.645039 78141 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0108 18:47:38.660129 78141 logs.go:123] Gathering logs for container status ...
I0108 18:47:38.660145 78141 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0108 18:47:38.706684 78141 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 02:45:42.309787 4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 02:45:43.395871 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 02:45:43.396578 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 18:47:38.706710 78141 out.go:239] *
W0108 18:47:38.706751 78141 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 02:45:42.309787 4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 02:45:43.395871 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 02:45:43.396578 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 18:47:38.706766 78141 out.go:239] *
W0108 18:47:38.707389 78141 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0108 18:47:38.790984 78141 out.go:177]
W0108 18:47:38.832847 78141 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0109 02:45:42.309787 4705 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0109 02:45:43.395871 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0109 02:45:43.396578 4705 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0108 18:47:38.832908 78141 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0108 18:47:38.832933 78141 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0108 18:47:38.854021 78141 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-134000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (261.60s)
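The preflight warnings in the captured output (Docker on the "cgroupfs" cgroup driver, Docker 24.0.7 not validated for kubeadm v1.18.20) line up with the K8S_KUBELET_NOT_RUNNING exit. A minimal manual retry sketch based on the log's own suggestion — profile name and flags are copied from the failing invocation above; whether the cgroup-driver override actually clears this failure on Docker 24.0.7 is an assumption, not something this log verifies:

  # remove the half-initialized profile, then retry with the suggested kubelet cgroup driver
  out/minikube-darwin-amd64 delete -p ingress-addon-legacy-134000
  out/minikube-darwin-amd64 start -p ingress-addon-legacy-134000 \
    --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
    --extra-config=kubelet.cgroup-driver=systemd

  # if it still fails, collect the full log bundle referenced in the advice box above
  out/minikube-darwin-amd64 logs -p ingress-addon-legacy-134000 --file=logs.txt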