=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-138000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0226 02:42:59.877014 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:43:27.564326 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/addons-108000/client.crt: no such file or directory
E0226 02:43:32.475733 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.480838 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.491276 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.511354 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.551530 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.631851 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:32.791953 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:33.113945 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:33.754179 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:35.034304 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:37.594539 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:42.714860 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:43:52.955103 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:44:13.435219 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
E0226 02:44:54.395293 10026 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/functional-349000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-138000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m39.107226685s)
-- stdout --
* [ingress-addon-legacy-138000] minikube v1.32.0 on Darwin 14.3.1
- MINIKUBE_LOCATION=18222
- KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-138000 in cluster ingress-addon-legacy-138000
* Pulling base image v0.0.42-1708008208-17936 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0226 02:41:08.332202 12956 out.go:291] Setting OutFile to fd 1 ...
I0226 02:41:08.332458 12956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:41:08.332464 12956 out.go:304] Setting ErrFile to fd 2...
I0226 02:41:08.332467 12956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0226 02:41:08.332665 12956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18222-9538/.minikube/bin
I0226 02:41:08.334107 12956 out.go:298] Setting JSON to false
I0226 02:41:08.359686 12956 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9639,"bootTime":1708934429,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0226 02:41:08.359776 12956 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0226 02:41:08.380960 12956 out.go:177] * [ingress-addon-legacy-138000] minikube v1.32.0 on Darwin 14.3.1
I0226 02:41:08.422757 12956 out.go:177] - MINIKUBE_LOCATION=18222
I0226 02:41:08.422786 12956 notify.go:220] Checking for updates...
I0226 02:41:08.465801 12956 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/18222-9538/kubeconfig
I0226 02:41:08.486676 12956 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0226 02:41:08.507976 12956 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0226 02:41:08.528796 12956 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18222-9538/.minikube
I0226 02:41:08.549574 12956 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0226 02:41:08.573250 12956 driver.go:392] Setting default libvirt URI to qemu:///system
I0226 02:41:08.629327 12956 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
I0226 02:41:08.629482 12956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0226 02:41:08.729705 12956 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-26 10:41:08.718875515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
I0226 02:41:08.750761 12956 out.go:177] * Using the docker driver based on user configuration
I0226 02:41:08.792844 12956 start.go:299] selected driver: docker
I0226 02:41:08.792863 12956 start.go:903] validating driver "docker" against <nil>
I0226 02:41:08.792877 12956 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0226 02:41:08.797459 12956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0226 02:41:08.896745 12956 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-26 10:41:08.886452523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
I0226 02:41:08.896934 12956 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0226 02:41:08.897135 12956 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0226 02:41:08.918586 12956 out.go:177] * Using Docker Desktop driver with root privileges
I0226 02:41:08.939575 12956 cni.go:84] Creating CNI manager for ""
I0226 02:41:08.939599 12956 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0226 02:41:08.939613 12956 start_flags.go:323] config:
{Name:ingress-addon-legacy-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0226 02:41:08.962703 12956 out.go:177] * Starting control plane node ingress-addon-legacy-138000 in cluster ingress-addon-legacy-138000
I0226 02:41:09.004478 12956 cache.go:121] Beginning downloading kic base image for docker with docker
I0226 02:41:09.025511 12956 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
I0226 02:41:09.067580 12956 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0226 02:41:09.067623 12956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
I0226 02:41:09.119319 12956 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
I0226 02:41:09.119358 12956 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
I0226 02:41:09.362782 12956 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0226 02:41:09.362835 12956 cache.go:56] Caching tarball of preloaded images
I0226 02:41:09.363930 12956 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0226 02:41:09.384905 12956 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0226 02:41:09.426582 12956 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0226 02:41:10.006171 12956 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0226 02:41:29.116993 12956 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0226 02:41:29.117559 12956 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0226 02:41:29.708034 12956 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0226 02:41:29.708295 12956 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/config.json ...
I0226 02:41:29.708322 12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/config.json: {Name:mk8ac7ec1fa1fe03846778d935a41a5d30088c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0226 02:41:29.709980 12956 cache.go:194] Successfully downloaded all kic artifacts
I0226 02:41:29.710015 12956 start.go:365] acquiring machines lock for ingress-addon-legacy-138000: {Name:mk92e967781564262689291af39d6cffbe63fff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0226 02:41:29.710229 12956 start.go:369] acquired machines lock for "ingress-addon-legacy-138000" in 203.659µs
I0226 02:41:29.710420 12956 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0226 02:41:29.710476 12956 start.go:125] createHost starting for "" (driver="docker")
I0226 02:41:29.736485 12956 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0226 02:41:29.736756 12956 start.go:159] libmachine.API.Create for "ingress-addon-legacy-138000" (driver="docker")
I0226 02:41:29.736797 12956 client.go:168] LocalClient.Create starting
I0226 02:41:29.736968 12956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem
I0226 02:41:29.737056 12956 main.go:141] libmachine: Decoding PEM data...
I0226 02:41:29.737086 12956 main.go:141] libmachine: Parsing certificate...
I0226 02:41:29.737188 12956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem
I0226 02:41:29.737262 12956 main.go:141] libmachine: Decoding PEM data...
I0226 02:41:29.737282 12956 main.go:141] libmachine: Parsing certificate...
I0226 02:41:29.757698 12956 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-138000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0226 02:41:29.809760 12956 cli_runner.go:211] docker network inspect ingress-addon-legacy-138000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0226 02:41:29.809871 12956 network_create.go:281] running [docker network inspect ingress-addon-legacy-138000] to gather additional debugging logs...
I0226 02:41:29.809888 12956 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-138000
W0226 02:41:29.860297 12956 cli_runner.go:211] docker network inspect ingress-addon-legacy-138000 returned with exit code 1
I0226 02:41:29.860349 12956 network_create.go:284] error running [docker network inspect ingress-addon-legacy-138000]: docker network inspect ingress-addon-legacy-138000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-138000 not found
I0226 02:41:29.860371 12956 network_create.go:286] output of [docker network inspect ingress-addon-legacy-138000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-138000 not found
** /stderr **
I0226 02:41:29.860544 12956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0226 02:41:29.911405 12956 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00228f180}
I0226 02:41:29.911449 12956 network_create.go:124] attempt to create docker network ingress-addon-legacy-138000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0226 02:41:29.911519 12956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-138000 ingress-addon-legacy-138000
I0226 02:41:29.999005 12956 network_create.go:108] docker network ingress-addon-legacy-138000 192.168.49.0/24 created
I0226 02:41:29.999054 12956 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-138000" container
I0226 02:41:29.999170 12956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0226 02:41:30.049038 12956 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-138000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-138000 --label created_by.minikube.sigs.k8s.io=true
I0226 02:41:30.100416 12956 oci.go:103] Successfully created a docker volume ingress-addon-legacy-138000
I0226 02:41:30.100576 12956 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-138000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-138000 --entrypoint /usr/bin/test -v ingress-addon-legacy-138000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
I0226 02:41:30.524487 12956 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-138000
I0226 02:41:30.524530 12956 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0226 02:41:30.524546 12956 kic.go:194] Starting extracting preloaded images to volume ...
I0226 02:41:30.524656 12956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-138000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
I0226 02:41:33.272101 12956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-138000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (2.747380863s)
I0226 02:41:33.272128 12956 kic.go:203] duration metric: took 2.747584 seconds to extract preloaded images to volume
I0226 02:41:33.272245 12956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0226 02:41:33.374571 12956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-138000 --name ingress-addon-legacy-138000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-138000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-138000 --network ingress-addon-legacy-138000 --ip 192.168.49.2 --volume ingress-addon-legacy-138000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
I0226 02:41:33.638000 12956 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Running}}
I0226 02:41:33.690776 12956 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
I0226 02:41:33.745393 12956 cli_runner.go:164] Run: docker exec ingress-addon-legacy-138000 stat /var/lib/dpkg/alternatives/iptables
I0226 02:41:33.876006 12956 oci.go:144] the created container "ingress-addon-legacy-138000" has a running status.
I0226 02:41:33.876053 12956 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa...
I0226 02:41:33.989014 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0226 02:41:33.989117 12956 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0226 02:41:34.050381 12956 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
I0226 02:41:34.105577 12956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0226 02:41:34.105602 12956 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-138000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0226 02:41:34.216841 12956 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-138000 --format={{.State.Status}}
I0226 02:41:34.267859 12956 machine.go:88] provisioning docker machine ...
I0226 02:41:34.267919 12956 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-138000"
I0226 02:41:34.268023 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:34.319800 12956 main.go:141] libmachine: Using SSH client type: native
I0226 02:41:34.320035 12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil> [] 0s} 127.0.0.1 58410 <nil> <nil>}
I0226 02:41:34.320054 12956 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-138000 && echo "ingress-addon-legacy-138000" | sudo tee /etc/hostname
I0226 02:41:34.477398 12956 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-138000
I0226 02:41:34.477497 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:34.528777 12956 main.go:141] libmachine: Using SSH client type: native
I0226 02:41:34.528968 12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil> [] 0s} 127.0.0.1 58410 <nil> <nil>}
I0226 02:41:34.528983 12956 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-138000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-138000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-138000' | sudo tee -a /etc/hosts;
fi
fi
I0226 02:41:34.662731 12956 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0226 02:41:34.662758 12956 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18222-9538/.minikube CaCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18222-9538/.minikube}
I0226 02:41:34.662787 12956 ubuntu.go:177] setting up certificates
I0226 02:41:34.662798 12956 provision.go:83] configureAuth start
I0226 02:41:34.662870 12956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-138000
I0226 02:41:34.713998 12956 provision.go:138] copyHostCerts
I0226 02:41:34.714042 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
I0226 02:41:34.714097 12956 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem, removing ...
I0226 02:41:34.714107 12956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem
I0226 02:41:34.714256 12956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.pem (1082 bytes)
I0226 02:41:34.714435 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
I0226 02:41:34.714463 12956 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem, removing ...
I0226 02:41:34.714468 12956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem
I0226 02:41:34.714582 12956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/cert.pem (1123 bytes)
I0226 02:41:34.714743 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
I0226 02:41:34.714782 12956 exec_runner.go:144] found /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem, removing ...
I0226 02:41:34.714787 12956 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem
I0226 02:41:34.714863 12956 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18222-9538/.minikube/key.pem (1675 bytes)
I0226 02:41:34.715042 12956 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-138000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-138000]
I0226 02:41:34.918394 12956 provision.go:172] copyRemoteCerts
I0226 02:41:34.919121 12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0226 02:41:34.919187 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:34.969809 12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
I0226 02:41:35.071026 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0226 02:41:35.071087 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0226 02:41:35.113193 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem -> /etc/docker/server.pem
I0226 02:41:35.113283 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0226 02:41:35.154779 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0226 02:41:35.154841 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0226 02:41:35.196322 12956 provision.go:86] duration metric: configureAuth took 533.510571ms
I0226 02:41:35.196344 12956 ubuntu.go:193] setting minikube options for container-runtime
I0226 02:41:35.196493 12956 config.go:182] Loaded profile config "ingress-addon-legacy-138000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0226 02:41:35.196565 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:35.247167 12956 main.go:141] libmachine: Using SSH client type: native
I0226 02:41:35.247359 12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil> [] 0s} 127.0.0.1 58410 <nil> <nil>}
I0226 02:41:35.247374 12956 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0226 02:41:35.384528 12956 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0226 02:41:35.384548 12956 ubuntu.go:71] root file system type: overlay
I0226 02:41:35.384667 12956 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0226 02:41:35.384749 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:35.435243 12956 main.go:141] libmachine: Using SSH client type: native
I0226 02:41:35.435442 12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil> [] 0s} 127.0.0.1 58410 <nil> <nil>}
I0226 02:41:35.435494 12956 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0226 02:41:35.593641 12956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0226 02:41:35.593742 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:35.644499 12956 main.go:141] libmachine: Using SSH client type: native
I0226 02:41:35.644678 12956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6c88920] 0x6c8b680 <nil> [] 0s} 127.0.0.1 58410 <nil> <nil>}
I0226 02:41:35.644694 12956 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0226 02:41:36.284982 12956 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-02-06 21:12:51.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-02-26 10:41:35.588250588 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0226 02:41:36.285003 12956 machine.go:91] provisioned docker machine in 2.017110554s
I0226 02:41:36.285014 12956 client.go:171] LocalClient.Create took 6.548209322s
I0226 02:41:36.285031 12956 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-138000" took 6.548279915s
I0226 02:41:36.285039 12956 start.go:300] post-start starting for "ingress-addon-legacy-138000" (driver="docker")
I0226 02:41:36.285046 12956 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0226 02:41:36.285104 12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0226 02:41:36.285174 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:36.336735 12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
I0226 02:41:36.440492 12956 ssh_runner.go:195] Run: cat /etc/os-release
I0226 02:41:36.444775 12956 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0226 02:41:36.444801 12956 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0226 02:41:36.444808 12956 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0226 02:41:36.444813 12956 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0226 02:41:36.444823 12956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/addons for local assets ...
I0226 02:41:36.444911 12956 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18222-9538/.minikube/files for local assets ...
I0226 02:41:36.445372 12956 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> 100262.pem in /etc/ssl/certs
I0226 02:41:36.445387 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> /etc/ssl/certs/100262.pem
I0226 02:41:36.445592 12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0226 02:41:36.460969 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /etc/ssl/certs/100262.pem (1708 bytes)
I0226 02:41:36.502415 12956 start.go:303] post-start completed in 217.354587ms
I0226 02:41:36.503053 12956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-138000
I0226 02:41:36.554018 12956 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/config.json ...
I0226 02:41:36.554686 12956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0226 02:41:36.554758 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:36.605003 12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
I0226 02:41:36.695934 12956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0226 02:41:36.701011 12956 start.go:128] duration metric: createHost completed in 6.990524768s
I0226 02:41:36.701031 12956 start.go:83] releasing machines lock for "ingress-addon-legacy-138000", held for 6.990796903s
I0226 02:41:36.701115 12956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-138000
I0226 02:41:36.751911 12956 ssh_runner.go:195] Run: cat /version.json
I0226 02:41:36.751988 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:36.752494 12956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0226 02:41:36.752739 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:36.805879 12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
I0226 02:41:36.806020 12956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58410 SSHKeyPath:/Users/jenkins/minikube-integration/18222-9538/.minikube/machines/ingress-addon-legacy-138000/id_rsa Username:docker}
I0226 02:41:36.896719 12956 ssh_runner.go:195] Run: systemctl --version
I0226 02:41:36.997165 12956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0226 02:41:37.003286 12956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0226 02:41:37.046010 12956 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0226 02:41:37.046073 12956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0226 02:41:37.075530 12956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0226 02:41:37.104593 12956 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0226 02:41:37.104613 12956 start.go:475] detecting cgroup driver to use...
I0226 02:41:37.104625 12956 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0226 02:41:37.104726 12956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0226 02:41:37.132785 12956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0226 02:41:37.149959 12956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0226 02:41:37.167086 12956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0226 02:41:37.167138 12956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0226 02:41:37.183103 12956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0226 02:41:37.199898 12956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0226 02:41:37.216914 12956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0226 02:41:37.232787 12956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0226 02:41:37.249132 12956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0226 02:41:37.265946 12956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0226 02:41:37.282034 12956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0226 02:41:37.297098 12956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0226 02:41:37.359583 12956 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0226 02:41:37.453036 12956 start.go:475] detecting cgroup driver to use...
I0226 02:41:37.453057 12956 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0226 02:41:37.453110 12956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0226 02:41:37.471853 12956 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0226 02:41:37.471916 12956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0226 02:41:37.491019 12956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0226 02:41:37.522894 12956 ssh_runner.go:195] Run: which cri-dockerd
I0226 02:41:37.527486 12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0226 02:41:37.543712 12956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0226 02:41:37.574316 12956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0226 02:41:37.640261 12956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0226 02:41:37.721344 12956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0226 02:41:37.721417 12956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0226 02:41:37.750532 12956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0226 02:41:37.813807 12956 ssh_runner.go:195] Run: sudo systemctl restart docker
I0226 02:41:38.063969 12956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0226 02:41:38.085764 12956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0226 02:41:38.157718 12956 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
I0226 02:41:38.157814 12956 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-138000 dig +short host.docker.internal
I0226 02:41:38.271428 12956 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0226 02:41:38.271908 12956 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0226 02:41:38.276442 12956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0226 02:41:38.294118 12956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-138000
I0226 02:41:38.363031 12956 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0226 02:41:38.363129 12956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0226 02:41:38.381370 12956 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0226 02:41:38.381383 12956 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0226 02:41:38.381441 12956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0226 02:41:38.396873 12956 ssh_runner.go:195] Run: which lz4
I0226 02:41:38.401166 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0226 02:41:38.401456 12956 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0226 02:41:38.405696 12956 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0226 02:41:38.405722 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0226 02:41:45.081065 12956 docker.go:649] Took 6.679828 seconds to copy over tarball
I0226 02:41:45.081136 12956 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0226 02:41:46.784820 12956 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.703664141s)
I0226 02:41:46.784845 12956 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0226 02:41:46.838243 12956 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0226 02:41:46.853946 12956 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0226 02:41:46.883619 12956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0226 02:41:46.945748 12956 ssh_runner.go:195] Run: sudo systemctl restart docker
I0226 02:41:48.289435 12956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.343665647s)
I0226 02:41:48.289539 12956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0226 02:41:48.306072 12956 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0226 02:41:48.306094 12956 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0226 02:41:48.306102 12956 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0226 02:41:48.311155 12956 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0226 02:41:48.311448 12956 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0226 02:41:48.312194 12956 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0226 02:41:48.312221 12956 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0226 02:41:48.312264 12956 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0226 02:41:48.313284 12956 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0226 02:41:48.313446 12956 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0226 02:41:48.313606 12956 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0226 02:41:48.317125 12956 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0226 02:41:48.318490 12956 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0226 02:41:48.319108 12956 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0226 02:41:48.319567 12956 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0226 02:41:48.319717 12956 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0226 02:41:48.319794 12956 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0226 02:41:48.319953 12956 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0226 02:41:48.320421 12956 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0226 02:41:50.287310 12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0226 02:41:50.304715 12956 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0226 02:41:50.304750 12956 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0226 02:41:50.304806 12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0226 02:41:50.322604 12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0226 02:41:50.349691 12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0226 02:41:50.370187 12956 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0226 02:41:50.370214 12956 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0226 02:41:50.370268 12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0226 02:41:50.387021 12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0226 02:41:50.410336 12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0226 02:41:50.412691 12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0226 02:41:50.421526 12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0226 02:41:50.429881 12956 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0226 02:41:50.429917 12956 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0226 02:41:50.429986 12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0226 02:41:50.432394 12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0226 02:41:50.432504 12956 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0226 02:41:50.432532 12956 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0226 02:41:50.432575 12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0226 02:41:50.437722 12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0226 02:41:50.442709 12956 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0226 02:41:50.442741 12956 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
I0226 02:41:50.442830 12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0226 02:41:50.451329 12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0226 02:41:50.452666 12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0226 02:41:50.452829 12956 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0226 02:41:50.452866 12956 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0226 02:41:50.452967 12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0226 02:41:50.462701 12956 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0226 02:41:50.462735 12956 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
I0226 02:41:50.462791 12956 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0226 02:41:50.466382 12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0226 02:41:50.476102 12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0226 02:41:50.480495 12956 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0226 02:41:51.136424 12956 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0226 02:41:51.155806 12956 cache_images.go:92] LoadImages completed in 2.849692645s
W0226 02:41:51.155847 12956 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18222-9538/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
I0226 02:41:51.155920 12956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0226 02:41:51.204899 12956 cni.go:84] Creating CNI manager for ""
I0226 02:41:51.204917 12956 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0226 02:41:51.204933 12956 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0226 02:41:51.204946   12956 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-138000 NodeName:ingress-addon-legacy-138000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0226 02:41:51.205039 12956 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-138000"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0226 02:41:51.205101 12956 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0226 02:41:51.205152 12956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0226 02:41:51.220959 12956 binaries.go:44] Found k8s binaries, skipping transfer
I0226 02:41:51.221011 12956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0226 02:41:51.236801 12956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0226 02:41:51.265980 12956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0226 02:41:51.295339 12956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0226 02:41:51.325584 12956 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0226 02:41:51.329712 12956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0226 02:41:51.346864 12956 certs.go:56] Setting up /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000 for IP: 192.168.49.2
I0226 02:41:51.346886 12956 certs.go:190] acquiring lock for shared ca certs: {Name:mkac1efdcc7c5f1039385f86b148562f7ea05475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0226 02:41:51.347081 12956 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key
I0226 02:41:51.347148 12956 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key
I0226 02:41:51.347194 12956 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.key
I0226 02:41:51.347212 12956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.crt with IP's: []
I0226 02:41:51.450510 12956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.crt ...
I0226 02:41:51.450525 12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.crt: {Name:mk162d6e138029c5409501d0c37715272ac2978c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0226 02:41:51.451195 12956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.key ...
I0226 02:41:51.451210 12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/client.key: {Name:mk64327ecc8de853fa995d7915c82afaca08b48f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0226 02:41:51.452205 12956 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key.dd3b5fb2
I0226 02:41:51.452233 12956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0226 02:41:51.557181 12956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt.dd3b5fb2 ...
I0226 02:41:51.557196 12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt.dd3b5fb2: {Name:mk245917daff69f5757c601893aa6619e282cc04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0226 02:41:51.558128 12956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key.dd3b5fb2 ...
I0226 02:41:51.558138 12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key.dd3b5fb2: {Name:mkae51405217599d74520e6e64596809250777dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0226 02:41:51.558772 12956 certs.go:337] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt
I0226 02:41:51.558956 12956 certs.go:341] copying /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key
I0226 02:41:51.559123 12956 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key
I0226 02:41:51.559140 12956 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt with IP's: []
I0226 02:41:51.782261 12956 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt ...
I0226 02:41:51.782277 12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt: {Name:mk5a3c2f0dbf18bf73ac054c18315e7a14d6c490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0226 02:41:51.782934 12956 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key ...
I0226 02:41:51.782945 12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key: {Name:mkc7ee880fc47b164b9c5d34f1104682413a395b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0226 02:41:51.783392 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0226 02:41:51.783422 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0226 02:41:51.783449 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0226 02:41:51.783469 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0226 02:41:51.783487 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0226 02:41:51.783504 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0226 02:41:51.783523 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0226 02:41:51.783539 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0226 02:41:51.783890 12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem (1338 bytes)
W0226 02:41:51.783958 12956 certs.go:433] ignoring /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026_empty.pem, impossibly tiny 0 bytes
I0226 02:41:51.783967 12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca-key.pem (1675 bytes)
I0226 02:41:51.783999 12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/ca.pem (1082 bytes)
I0226 02:41:51.784027 12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/cert.pem (1123 bytes)
I0226 02:41:51.784069 12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/certs/key.pem (1675 bytes)
I0226 02:41:51.784133 12956 certs.go:437] found cert: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem (1708 bytes)
I0226 02:41:51.784169 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem -> /usr/share/ca-certificates/100262.pem
I0226 02:41:51.784188 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0226 02:41:51.784202 12956 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem -> /usr/share/ca-certificates/10026.pem
I0226 02:41:51.784683 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0226 02:41:51.826749 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0226 02:41:51.868923 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0226 02:41:51.908670 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/profiles/ingress-addon-legacy-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0226 02:41:51.948507 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0226 02:41:51.989144 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0226 02:41:52.029854 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0226 02:41:52.071476 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0226 02:41:52.111911 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/files/etc/ssl/certs/100262.pem --> /usr/share/ca-certificates/100262.pem (1708 bytes)
I0226 02:41:52.153243 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0226 02:41:52.194681 12956 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18222-9538/.minikube/certs/10026.pem --> /usr/share/ca-certificates/10026.pem (1338 bytes)
I0226 02:41:52.235575 12956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0226 02:41:52.265333 12956 ssh_runner.go:195] Run: openssl version
I0226 02:41:52.271596 12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10026.pem && ln -fs /usr/share/ca-certificates/10026.pem /etc/ssl/certs/10026.pem"
I0226 02:41:52.287681 12956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10026.pem
I0226 02:41:52.292031 12956 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 26 10:36 /usr/share/ca-certificates/10026.pem
I0226 02:41:52.292081 12956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10026.pem
I0226 02:41:52.298626 12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10026.pem /etc/ssl/certs/51391683.0"
I0226 02:41:52.315093 12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100262.pem && ln -fs /usr/share/ca-certificates/100262.pem /etc/ssl/certs/100262.pem"
I0226 02:41:52.330885 12956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100262.pem
I0226 02:41:52.335099 12956 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 26 10:36 /usr/share/ca-certificates/100262.pem
I0226 02:41:52.335154 12956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100262.pem
I0226 02:41:52.341495 12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100262.pem /etc/ssl/certs/3ec20f2e.0"
I0226 02:41:52.357293 12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0226 02:41:52.373815 12956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0226 02:41:52.377855 12956 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 26 10:29 /usr/share/ca-certificates/minikubeCA.pem
I0226 02:41:52.377896 12956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0226 02:41:52.384159 12956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0226 02:41:52.399866 12956 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0226 02:41:52.404034 12956 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0226 02:41:52.404079   12956 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-138000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0226 02:41:52.404177 12956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0226 02:41:52.420857 12956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0226 02:41:52.435699 12956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0226 02:41:52.450032 12956 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0226 02:41:52.450099 12956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0226 02:41:52.465742 12956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0226 02:41:52.465769 12956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0226 02:41:52.518292 12956 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0226 02:41:52.518353 12956 kubeadm.go:322] [preflight] Running pre-flight checks
I0226 02:41:52.754643 12956 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0226 02:41:52.754723 12956 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0226 02:41:52.754804 12956 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0226 02:41:52.967176 12956 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0226 02:41:52.967825 12956 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0226 02:41:52.967876 12956 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0226 02:41:53.041252 12956 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0226 02:41:53.062666 12956 out.go:204] - Generating certificates and keys ...
I0226 02:41:53.062750 12956 kubeadm.go:322] [certs] Using existing ca certificate authority
I0226 02:41:53.062810 12956 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0226 02:41:53.117652 12956 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0226 02:41:53.240616 12956 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0226 02:41:53.490408 12956 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0226 02:41:53.651901 12956 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0226 02:41:53.787493 12956 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0226 02:41:53.787621 12956 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0226 02:41:53.919830 12956 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0226 02:41:53.919943 12956 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0226 02:41:54.039569 12956 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0226 02:41:54.245122 12956 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0226 02:41:54.341299 12956 kubeadm.go:322] [certs] Generating "sa" key and public key
I0226 02:41:54.341353 12956 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0226 02:41:54.482884 12956 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0226 02:41:54.598510 12956 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0226 02:41:54.795980 12956 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0226 02:41:54.842442 12956 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0226 02:41:54.842922 12956 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0226 02:41:54.864524 12956 out.go:204] - Booting up control plane ...
I0226 02:41:54.864658 12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0226 02:41:54.864779 12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0226 02:41:54.864894 12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0226 02:41:54.865045 12956 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0226 02:41:54.865300 12956 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0226 02:42:34.850883 12956 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0226 02:42:34.851576 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:42:34.851743 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:42:39.853214 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:42:39.853371 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:42:49.854432 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:42:49.854597 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:43:09.855445 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:43:09.855610 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:43:49.862209 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:43:49.862393 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:43:49.862411 12956 kubeadm.go:322]
I0226 02:43:49.862448 12956 kubeadm.go:322] Unfortunately, an error has occurred:
I0226 02:43:49.862536 12956 kubeadm.go:322] timed out waiting for the condition
I0226 02:43:49.862549 12956 kubeadm.go:322]
I0226 02:43:49.862588 12956 kubeadm.go:322] This error is likely caused by:
I0226 02:43:49.862633 12956 kubeadm.go:322] - The kubelet is not running
I0226 02:43:49.862722 12956 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0226 02:43:49.862730 12956 kubeadm.go:322]
I0226 02:43:49.862819 12956 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0226 02:43:49.862854 12956 kubeadm.go:322] - 'systemctl status kubelet'
I0226 02:43:49.862895 12956 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0226 02:43:49.862904 12956 kubeadm.go:322]
I0226 02:43:49.863015 12956 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0226 02:43:49.863101 12956 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0226 02:43:49.863114 12956 kubeadm.go:322]
I0226 02:43:49.863189 12956 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0226 02:43:49.863241 12956 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0226 02:43:49.863301 12956 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0226 02:43:49.863327 12956 kubeadm.go:322] - 'docker logs CONTAINERID'
I0226 02:43:49.863331 12956 kubeadm.go:322]
I0226 02:43:49.867677 12956 kubeadm.go:322] W0226 10:41:52.517631 1763 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0226 02:43:49.867838 12956 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0226 02:43:49.867937 12956 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0226 02:43:49.868047 12956 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
I0226 02:43:49.868159 12956 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0226 02:43:49.868264 12956 kubeadm.go:322] W0226 10:41:54.846425 1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0226 02:43:49.868369 12956 kubeadm.go:322] W0226 10:41:54.847975 1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0226 02:43:49.868434 12956 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0226 02:43:49.868502 12956 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0226 02:43:49.868633 12956 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0226 10:41:52.517631 1763 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0226 10:41:54.846425 1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0226 10:41:54.847975 1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-138000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0226 10:41:52.517631 1763 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0226 10:41:54.846425 1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0226 10:41:54.847975 1763 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0226 02:43:49.868667 12956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0226 02:43:50.286482 12956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0226 02:43:50.304515 12956 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0226 02:43:50.304571 12956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0226 02:43:50.320041 12956 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0226 02:43:50.320067 12956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0226 02:43:50.370901 12956 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0226 02:43:50.370963 12956 kubeadm.go:322] [preflight] Running pre-flight checks
I0226 02:43:50.607495 12956 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0226 02:43:50.607583 12956 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0226 02:43:50.607661 12956 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0226 02:43:50.767914 12956 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0226 02:43:50.769170 12956 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0226 02:43:50.769289 12956 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0226 02:43:50.845255 12956 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0226 02:43:50.866735 12956 out.go:204] - Generating certificates and keys ...
I0226 02:43:50.866841 12956 kubeadm.go:322] [certs] Using existing ca certificate authority
I0226 02:43:50.866922 12956 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0226 02:43:50.867033 12956 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0226 02:43:50.867128 12956 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0226 02:43:50.867187 12956 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0226 02:43:50.867276 12956 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0226 02:43:50.867370 12956 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0226 02:43:50.867448 12956 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0226 02:43:50.867561 12956 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0226 02:43:50.867634 12956 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0226 02:43:50.867670 12956 kubeadm.go:322] [certs] Using the existing "sa" key
I0226 02:43:50.867718 12956 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0226 02:43:51.084598 12956 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0226 02:43:51.226324 12956 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0226 02:43:51.385157 12956 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0226 02:43:51.828237 12956 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0226 02:43:51.829687 12956 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0226 02:43:51.849576 12956 out.go:204] - Booting up control plane ...
I0226 02:43:51.849653 12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0226 02:43:51.849718 12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0226 02:43:51.849784 12956 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0226 02:43:51.849865 12956 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0226 02:43:51.849999 12956 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0226 02:44:31.838370 12956 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0226 02:44:31.838699 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:44:31.838852 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:44:36.839843 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:44:36.840005 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:44:46.841271 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:44:46.841439 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:45:06.842310 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:45:06.842456 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:45:46.845228 12956 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0226 02:45:46.845459 12956 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0226 02:45:46.845472 12956 kubeadm.go:322]
I0226 02:45:46.845511 12956 kubeadm.go:322] Unfortunately, an error has occurred:
I0226 02:45:46.845572 12956 kubeadm.go:322] timed out waiting for the condition
I0226 02:45:46.845588 12956 kubeadm.go:322]
I0226 02:45:46.845646 12956 kubeadm.go:322] This error is likely caused by:
I0226 02:45:46.845685 12956 kubeadm.go:322] - The kubelet is not running
I0226 02:45:46.845824 12956 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0226 02:45:46.845842 12956 kubeadm.go:322]
I0226 02:45:46.845972 12956 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0226 02:45:46.846015 12956 kubeadm.go:322] - 'systemctl status kubelet'
I0226 02:45:46.846050 12956 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0226 02:45:46.846055 12956 kubeadm.go:322]
I0226 02:45:46.846194 12956 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0226 02:45:46.846283 12956 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0226 02:45:46.846292 12956 kubeadm.go:322]
I0226 02:45:46.846394 12956 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0226 02:45:46.846464 12956 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0226 02:45:46.846549 12956 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0226 02:45:46.846583 12956 kubeadm.go:322] - 'docker logs CONTAINERID'
I0226 02:45:46.846590 12956 kubeadm.go:322]
I0226 02:45:46.851149 12956 kubeadm.go:322] W0226 10:43:50.370203 4762 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0226 02:45:46.851290 12956 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0226 02:45:46.851355 12956 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0226 02:45:46.851494 12956 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
I0226 02:45:46.851583 12956 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0226 02:45:46.851690 12956 kubeadm.go:322] W0226 10:43:51.833273 4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0226 02:45:46.851812 12956 kubeadm.go:322] W0226 10:43:51.834003 4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0226 02:45:46.851886 12956 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0226 02:45:46.851970 12956 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0226 02:45:46.852006 12956 kubeadm.go:406] StartCluster complete in 3m54.448032528s
I0226 02:45:46.853836 12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0226 02:45:46.872150 12956 logs.go:276] 0 containers: []
W0226 02:45:46.872165 12956 logs.go:278] No container was found matching "kube-apiserver"
I0226 02:45:46.872229 12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0226 02:45:46.888276 12956 logs.go:276] 0 containers: []
W0226 02:45:46.888292 12956 logs.go:278] No container was found matching "etcd"
I0226 02:45:46.888366 12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0226 02:45:46.904401 12956 logs.go:276] 0 containers: []
W0226 02:45:46.904416 12956 logs.go:278] No container was found matching "coredns"
I0226 02:45:46.904484 12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0226 02:45:46.920274 12956 logs.go:276] 0 containers: []
W0226 02:45:46.920291 12956 logs.go:278] No container was found matching "kube-scheduler"
I0226 02:45:46.920371 12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0226 02:45:46.936579 12956 logs.go:276] 0 containers: []
W0226 02:45:46.936595 12956 logs.go:278] No container was found matching "kube-proxy"
I0226 02:45:46.936676 12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0226 02:45:46.954227 12956 logs.go:276] 0 containers: []
W0226 02:45:46.954242 12956 logs.go:278] No container was found matching "kube-controller-manager"
I0226 02:45:46.954310 12956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0226 02:45:46.971521 12956 logs.go:276] 0 containers: []
W0226 02:45:46.971537 12956 logs.go:278] No container was found matching "kindnet"
I0226 02:45:46.971545 12956 logs.go:123] Gathering logs for kubelet ...
I0226 02:45:46.971551 12956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0226 02:45:47.013149 12956 logs.go:123] Gathering logs for dmesg ...
I0226 02:45:47.013167 12956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0226 02:45:47.033154 12956 logs.go:123] Gathering logs for describe nodes ...
I0226 02:45:47.033169 12956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0226 02:45:47.097269 12956 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0226 02:45:47.097287 12956 logs.go:123] Gathering logs for Docker ...
I0226 02:45:47.097295 12956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0226 02:45:47.118608 12956 logs.go:123] Gathering logs for container status ...
I0226 02:45:47.118638 12956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0226 02:45:47.180412 12956 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0226 10:43:50.370203 4762 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0226 10:43:51.833273 4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0226 10:43:51.834003 4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0226 02:45:47.180436 12956 out.go:239] *
W0226 02:45:47.180470 12956 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0226 10:43:50.370203 4762 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0226 10:43:51.833273 4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0226 10:43:51.834003 4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0226 02:45:47.180485 12956 out.go:239] *
W0226 02:45:47.181119 12956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0226 02:45:47.246945 12956 out.go:177]
W0226 02:45:47.289975 12956 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0226 10:43:50.370203 4762 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0226 10:43:51.833273 4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0226 10:43:51.834003 4762 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0226 02:45:47.290048 12956 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0226 02:45:47.290095 12956 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0226 02:45:47.332822 12956 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-138000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (279.15s)